Cygnus does not start as a service

Time: 2015-08-18 14:46:43

Tags: fiware fiware-cygnus

I have been checking other people's questions about Cygnus configuration files, but I still cannot get mine to work.

Starting Cygnus with "service cygnus start" fails.

When I try to start the service, the log at /var/log/cygnus/cygnus.log shows:

Warning: JAVA_HOME is not set!
+ exec /usr/bin/java -Xmx20m -Dflume.log.file=cygnus.log -cp '/usr/cygnus/conf:/usr/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/lib/*:/usr/cygnus/plugins.d/cygnus/libext/*' -Djava.library.path= com.telefonica.iot.cygnus.nodes.CygnusApplication -p 8081 -f /usr/cygnus/conf/agent_1.conf -n cygnusagent
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/cygnus/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/cygnus/plugins.d/cygnus/lib/cygnus-0.8.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: ./logs/cygnus.log (No such file or directory)
    at java.io.FileOutputStream.openAppend(Native Method)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:210)
    at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
    at org.apache.log4j.FileAppender.setFile(FileAppender.java:294)
    at org.apache.log4j.RollingFileAppender.setFile(RollingFileAppender.java:207)
    at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:165)
    at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:307)
    at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:172)
    at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:104)
    at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:809)
    at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:735)
    at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:615)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:502)
    at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:547)
    at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:483)
    at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
    at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:242)
    at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:254)
    at org.apache.flume.node.Application.<clinit>(Application.java:58)
Starting an ordered shutdown of Cygnus
Stopping sources
All the channels are empty
Stopping channels
Stopping hdfs-channel (lyfecycle state=START)
Stopping sinks
Stopping hdfs-sink (lyfecycle state=START)
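
The FileNotFoundException in the trace is the telling part: log4j is trying to append to ./logs/cygnus.log, a path relative to whatever directory the process was started from. A minimal sketch of why that can fail (throwaway /tmp paths, nothing Cygnus-specific):

```shell
# A relative log path like ./logs/cygnus.log only resolves if the process
# happens to start in the right working directory (demo paths only):
mkdir -p /tmp/cygnus-demo/logs /tmp/cygnus-demo/elsewhere
cd /tmp/cygnus-demo
( : >> ./logs/cygnus.log ) && echo "append works from $PWD"
cd elsewhere
( : >> ./logs/cygnus.log ) 2>/dev/null || echo "append fails from $PWD"
```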

JAVA_HOME is set, so I think the problem lies in the configuration files:

agent_1.conf:

cygnusagent.sources = http-source
cygnusagent.sinks = hdfs-sink 
cygnusagent.channels = hdfs-channel

#=============================================
 # source configuration
 # channel name where to write the notification events
cygnusagent.sources.http-source.channels = hdfs-channel
 # source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
 # listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
 # Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
 # URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
 # Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
 # Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
 # Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
 # Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
 # TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
 # GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
 # Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
 # See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf

# ============================================
 # OrionHDFSSink configuration
 # channel name from where to read notification events
cygnusagent.sinks.hdfs-sink.channel = hdfs-channel
 # sink class, must not be changed
cygnusagent.sinks.hdfs-sink.type = com.telefonica.iot.cygnus.sinks.OrionHDFSSink
 # Comma-separated list of FQDN/IP address regarding the HDFS Namenode endpoints
 # If you are using Kerberos authentication, then the usage of FQDNs instead of IP addresses is mandatory
cygnusagent.sinks.hdfs-sink.hdfs_host = cosmos.lab.fiware.org
 # port of the HDFS service listening for persistence operations; 14000 for httpfs, 50070 for webhdfs
cygnusagent.sinks.hdfs-sink.hdfs_port = 14000
 # username allowed to write in HDFS
cygnusagent.sinks.hdfs-sink.hdfs_username = MYUSERNAME
 # OAuth2 token
cygnusagent.sinks.hdfs-sink.oauth2_token = MYTOKEN
 # how the attributes are stored, either per row either per column (row, column)
cygnusagent.sinks.hdfs-sink.attr_persistence = column
 # Hive FQDN/IP address of the Hive server
cygnusagent.sinks.hdfs-sink.hive_host = cosmos.lab.fiware.org
 # Hive port for Hive external table provisioning
cygnusagent.sinks.hdfs-sink.hive_port = 10000
 # Kerberos-based authentication enabling
cygnusagent.sinks.hdfs-sink.krb5_auth = false
 # Kerberos username
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_user = krb5_username
 # Kerberos password
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_password = xxxxxxxxxxxxx
 # Kerberos login file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_login_conf_file = /usr/cygnus/conf/krb5_login.conf
 # Kerberos configuration file
cygnusagent.sinks.hdfs-sink.krb5_auth.krb5_conf_file = /usr/cygnus/conf/krb5.conf

#=============================================
 # hdfs-channel configuration
 # channel type (must not be changed)
cygnusagent.channels.hdfs-channel.type = memory
 # capacity of the channel
cygnusagent.channels.hdfs-channel.capacity = 1000
 # amount of bytes that can be sent per transaction
cygnusagent.channels.hdfs-channel.transactionCapacity = 100

and cygnus_instance_1.conf:

CYGNUS_USER=cygnus

CONFIG_FOLDER=/usr/cygnus/conf

CONFIG_FILE=/usr/cygnus/conf/agent_1.conf

# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters 
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent

# Name of the logfile located at /var/log/cygnus.
LOGFILE_NAME=cygnus.log

# Administration port. Must be unique per instance
ADMIN_PORT=8081

# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30

I hope this is a simple problem. Let me know if you need more information.

By the way, I obtained my token following the instructions at this link. Shouldn't there be a password field for accessing the COSMOS global instance, or is the token enough?

Thanks

2 answers:

Answer 0 (score: 2)

Although I have not worked much on Cygnus in a long time, the problem you mention looks like the application cannot find the log directory when starting Cygnus, so the service fails to start. This is a configuration issue.

To avoid this, there are a few steps you can take that may help.

  1. Start the Cygnus service from Cygnus's home directory, where the log directory is also accessible. For example, assuming your Cygnus home directory is "/usr/local/cygnus" and the logs are at "/usr/local/cygnus/logs/", start the Cygnus service from the Cygnus home directory with "sh /usr/local/cygnus/bin/cygnus start". This will work because the log directory will be accessible to Cygnus as "./logs/cygnus.log".

  2. Add CYGNUS_HOME to ~/.bash_profile and export it. This sets up the path to the Cygnus home directory, which helps in reaching the log location, so you can use "service cygnus start".

  3. Update the Cygnus logging configuration with the full path of the log location, e.g. "/usr/local/cygnus/logs/", and start the service.
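
Steps 1 and 2 above can be sketched roughly as follows (the home directory here is a hypothetical demo path; substitute your real install, e.g. the "/usr/local/cygnus" from the answer):

```shell
# Hypothetical Cygnus home directory; a real install might be /usr/local/cygnus.
CYGNUS_HOME=/tmp/cygnus-home-demo
mkdir -p "$CYGNUS_HOME/logs"      # make sure the logs directory exists under the home
cd "$CYGNUS_HOME"                 # step 1: start from here so ./logs/cygnus.log resolves
: >> logs/cygnus.log              # the relative path log4j uses is now reachable
# step 2: persist the variable for login shells
grep -qs 'CYGNUS_HOME' ~/.bash_profile || \
  echo "export CYGNUS_HOME=$CYGNUS_HOME" >> ~/.bash_profile
```

Step 3, pointing the logging configuration at an absolute path, avoids depending on the working directory at all.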

Answer 1 (score: 1)

Since I had to solve this problem again, I will take the opportunity to explain how I did it.

Basically, log4j needs to be configured. You can find more details about it in this section of the installation and configuration instructions in the Cygnus repository.

In my case I use only a single Cygnus instance writing to HDFS. To get Cygnus running well, I edited the following line in CYGNUS_PATH/conf/log4j.properties to point at an absolute path containing a cygnus.log file:

flume.log.dir=/usr/cygnus/logs

where /usr/cygnus is my CYGNUS_PATH.

So now log4j will write Cygnus's operations to /usr/cygnus/logs/cygnus.log. Of course, you are free to put the log file anywhere you want, as long as you reference it correctly in this properties file.
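
For reference, the relevant wiring in a stock Flume-based log4j.properties looks roughly like this (property names as shipped with Flume; treat it as a sketch and check your own copy):

```properties
# Absolute directory instead of the default relative ./logs
flume.log.dir=/usr/cygnus/logs
flume.log.file=cygnus.log
# The file appender builds its target path from the two properties above
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
```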

Hope this helps anyone with a similar problem.