By Juergenkress-Oracle on Dec 06, 2014
I have read Mark's blog (http://www.rittmanmead.com) about using Flume and HDFS. It sounds very easy, and my idea was to integrate it as an extension to Oracle Cloud Control, to get an extended tool for detailed log file monitoring and analytics. Here is my experience with it:
Download and Installation
I downloaded all the files here and extracted them. I set my Java home and started Flume on the agent side, with a configuration like the one in Mark's blog. On the server side, I used Cloudera Express Edition on one node. Please set dfs.replication to 1 (/etc/hadoop/conf/hdfs-site.xml) if you use only one node for testing. I installed it without Flume. I did the tests by starting it from the shell, using the standard version 184.108.40.206. Be aware that you must use a user which is configured inside your HDFS; otherwise you cannot log in to HDFS, or you have to try Kerberos authentication …
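For the single-node test, the relevant hdfs-site.xml entry is the dfs.replication property; a sketch of just that property (the rest of the file stays as generated by Cloudera) looks like this:

<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>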
Here is the config for a managed server log file:
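As an illustration only, a minimal Flume agent configuration along these lines might look like the sketch below; the agent name wlsagent, the channel and sink names, the log file path and the HDFS URL are placeholder assumptions, not the exact values from my setup:

# hypothetical agent name: wlsagent
wlsagent.sources  = wlslog
wlsagent.channels = mem1
wlsagent.sinks    = hdfs1

# exec source tailing the managed server log file
wlsagent.sources.wlslog.type = exec
wlsagent.sources.wlslog.command = tail -F /u01/oracle/domains/soa_domain/servers/soa_server1/logs/soa_server1.log
wlsagent.sources.wlslog.channels = mem1

# in-memory channel buffering events between source and sink
wlsagent.channels.mem1.type = memory
wlsagent.channels.mem1.capacity = 10000
wlsagent.channels.mem1.transactionCapacity = 1000

# HDFS sink writing plain text files, rolled every 5 minutes
wlsagent.sinks.hdfs1.type = hdfs
wlsagent.sinks.hdfs1.channel = mem1
wlsagent.sinks.hdfs1.hdfs.path = hdfs://cdh-node:8020/user/flume/weblogic/%Y-%m-%d
wlsagent.sinks.hdfs1.hdfs.fileType = DataStream
wlsagent.sinks.hdfs1.hdfs.writeFormat = Text
wlsagent.sinks.hdfs1.hdfs.rollInterval = 300
wlsagent.sinks.hdfs1.hdfs.useLocalTimeStamp = true

Such an agent is started from the shell with flume-ng, for example: bin/flume-ng agent --conf conf --conf-file conf/wlsagent.conf --name wlsagent -Dflume.root.logger=INFO,console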
Log file rotation on WebLogic
The above configuration works well if you have no rotation on the log files. But what happens if WebLogic rotates the log files? In my case the Flume agents stopped picking up new data. The Flume documentation also describes the possibility to spool folders (a spooling directory source). For this, the archived, rotated log files must be in a separate folder; if everything is in one folder, you get the errors as described. So I changed the log configuration in WLS, here for a managed server of SOA Suite: I added a separate log rotation directory and disabled auto delete of files (a sketch of such a spooling directory source is shown below). I used a SOA server on 220.127.116.11.
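A spooling directory source watches a folder for completed (rotated) files instead of tailing a single file. A minimal sketch, assuming the rotated logs are moved to a hypothetical folder /u01/oracle/logs/soa_server1_rotated and reusing the channel from the agent sketch above, could look like this:

# hypothetical spooling directory source on the same agent
wlsagent.sources = wlsspool
wlsagent.sources.wlsspool.type = spooldir
wlsagent.sources.wlsspool.spoolDir = /u01/oracle/logs/soa_server1_rotated
wlsagent.sources.wlsspool.fileHeader = true
wlsagent.sources.wlsspool.channels = mem1

Note that by default the spooling directory source renames a fully ingested file with a .COMPLETED suffix, so the Flume user needs write access to the rotation directory.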
Read the complete article here.
For regular information become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.