
    [Original] Calling Hadoop 2.6 from Java and Web Programs

    Posted by fansy1990 on 2015-01-11 01:11:42

    1. Hadoop cluster:

    1.1 System and hardware configuration:

    Hadoop version: 2.6; three virtual machines: node101 (192.168.0.101), node102 (192.168.0.102), node103 (192.168.0.103); each machine has 2 GB of memory and 1 CPU core.

    node101: NodeManager、 NameNode、ResourceManager、DataNode;

    node102: NodeManager、DataNode 、SecondaryNameNode、JobHistoryServer;

    node103: NodeManager 、DataNode;

    1.2 Problems encountered during configuration:

    1) The NodeManager would not start

    The virtual machines were initially given 512 MB of memory, so yarn.nodemanager.resource.memory-mb in yarn-site.xml was set to 512 (its default is 1024). Checking the logs showed this error:

    org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager from  node101 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
    Changing this value to 1024 or above lets the NodeManager start normally; I set it to 2048.

    2) Jobs could be submitted but would not continue running

    a. Each virtual machine has only one core, but yarn.nodemanager.resource.cpu-vcores in yarn-site.xml defaults to 8, which causes problems during resource allocation, so I set this parameter to 1;

    b. The following error appeared:

    is running beyond virtual memory limits. Current usage: 96.6 MB of 1.5 GB physical memory used; 1.6 GB of 1.5 GB virtual memory used. Killing container.
    This is presumably caused by inconsistently sized map, reduce, and NodeManager memory settings, but after adjusting them for a long time they looked correct to me and the error kept appearing. In the end I simply disabled the check by setting yarn.nodemanager.vmem-check-enabled to false in yarn-site.xml (see the listing in section 1.3); after that jobs could be submitted and run.

    1.3 Configuration files (I hope someone more experienced can point out how to size the resources so that error b above does not occur, rather than resorting to disabling the check):

    1) In hadoop-env.sh and yarn-env.sh, configure the JDK, and set HADOOP_HEAPSIZE and YARN_HEAPSIZE to 512;

    2) hdfs-site.xml: configure the data storage paths and the SecondaryNameNode node:

    
    
    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:////data/hadoop/hdfs/name</value>
        <description>Determines where on the local filesystem the DFS name node
          should store the name table(fsimage).  If this is a comma-delimited list
          of directories then the name table is replicated in all of the
          directories, for redundancy.</description>
      </property>
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/hadoop/hdfs/data</value>
        <description>Determines where on the local filesystem an DFS data node
          should store its blocks.  If this is a comma-delimited
          list of directories, then data will be stored in all named
          directories, typically on different devices.
          Directories that do not exist are ignored.</description>
      </property>
      <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node102:50090</value>
      </property>
    </configuration>
    
    3) core-site.xml: configure the NameNode:

    
    
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node101:8020</value>
      </property>
    </configuration>
    
    
    4) mapred-site.xml: configure the MapReduce framework and the map/reduce resources:

    
    
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description>The runtime framework for executing MapReduce jobs.
          Can be one of local, classic or yarn.</description>
      </property>
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node102:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
      </property>
      <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1024</value>
      </property>
      <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>1024</value>
      </property>
      <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx512m</value>
      </property>
      <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx512m</value>
      </property>
    </configuration>
    
    5) yarn-site.xml: configure the ResourceManager and related resources:

    
    
     
    <configuration>
      <property>
        <description>The hostname of the RM.</description>
        <name>yarn.resourcemanager.hostname</name>
        <value>node101</value>
      </property>
      <property>
        <description>The address of the applications manager interface in the RM.</description>
        <name>yarn.resourcemanager.address</name>
        <value>${yarn.resourcemanager.hostname}:8032</value>
      </property>
      <property>
        <description>The address of the scheduler interface.</description>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>${yarn.resourcemanager.hostname}:8030</value>
      </property>
      <property>
        <description>The http address of the RM web application.</description>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>${yarn.resourcemanager.hostname}:8088</value>
      </property>
      <property>
        <description>The https adddress of the RM web application.</description>
        <name>yarn.resourcemanager.webapp.https.address</name>
        <value>${yarn.resourcemanager.hostname}:8090</value>
      </property>
      <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>${yarn.resourcemanager.hostname}:8031</value>
      </property>
      <property>
        <description>The address of the RM admin interface.</description>
        <name>yarn.resourcemanager.admin.address</name>
        <value>${yarn.resourcemanager.hostname}:8033</value>
      </property>
      <property>
        <description>List of directories to store localized files in. An
          application's localized file directory will be found in:
          ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
          Individual containers' work directories, called container_${contid}, will
          be subdirectories of this.</description>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/hadoop/yarn/local</value>
      </property>
      <property>
        <description>Whether to enable log aggregation</description>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
      </property>
      <property>
        <description>Where to aggregate logs to.</description>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/data/tmp/logs</value>
      </property>
      <property>
        <description>Amount of physical memory, in MB, that can be allocated
          for containers.</description>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
      </property>
      <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>512</value>
      </property>
      <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>1.0</value>
      </property>
      <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
      </property>
      <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>1</value>
      </property>
      <property>
        <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
      </property>
      <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
      </property>
    </configuration>
          
    

    2. Calling Hadoop 2.6 from Java to run an MR job:

    Two places need to be modified:

    1) The Configuration in the driver program needs the following settings:

    Configuration conf = new Configuration();

    conf.setBoolean("mapreduce.app-submission.cross-platform", true);   // enable cross-platform job submission
    conf.set("fs.defaultFS", "hdfs://node101:8020");                     // the NameNode
    conf.set("mapreduce.framework.name", "yarn");                        // use the YARN framework
    conf.set("yarn.resourcemanager.address", "node101:8032");            // the ResourceManager
    conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");  // the resource scheduler
    2) Add the following classes to the classpath:





    Nothing else needs to be modified; with these changes the job can run. A minimal driver sketch that uses this configuration is given below.
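
    For reference, here is a minimal driver sketch (not part of the original post) showing how a Configuration set up as above is typically used to submit a job from a remote Java client. The class name RemoteSubmitDriver, the identity Mapper/Reducer, and the input/output path arguments are placeholders; depending on how the code is packaged, you may also need job.setJar("...") so that the job classes actually reach the cluster.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

        public class RemoteSubmitDriver {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.setBoolean("mapreduce.app-submission.cross-platform", true);   // cross-platform submission
                conf.set("fs.defaultFS", "hdfs://node101:8020");                     // the NameNode
                conf.set("mapreduce.framework.name", "yarn");                        // use the YARN framework
                conf.set("yarn.resourcemanager.address", "node101:8032");            // the ResourceManager
                conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");  // the resource scheduler

                Job job = Job.getInstance(conf, "remote-submit-demo");
                job.setJarByClass(RemoteSubmitDriver.class);
                // Identity map/reduce just to exercise submission; replace with your own classes.
                job.setMapperClass(Mapper.class);
                job.setReducerClass(Reducer.class);
                job.setOutputKeyClass(LongWritable.class);
                job.setOutputValueClass(Text.class);
                FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input path
                FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output path (must not exist)
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }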


    3. Calling Hadoop 2.6 from a web program to run an MR job

    The program can be downloaded from the "java web程序调用hadoop2.6" download link;

    The Hadoop-invocation part of this web program is the same as the Java code above, essentially unmodified, and all of the jar packages it uses are placed under lib.

    One final note: I ran three map tasks, but they were not evenly distributed across the nodes:


    In one run, node103 was assigned two map tasks and node101 one; in another run, node101 got two maps and node103 one. In both runs node102 was not assigned any map task, which suggests there is still something not quite right in the resource management and task assignment setup.


    Share, grow, be happy

    If you repost, please credit the blog: http://blog.csdn.net/fansy1990



