
Hadoop Source Code Walkthrough: Using the Embedded Jetty HTTP Server

2012-08-09 

Hadoop embeds the Jetty HTTP server, which serves two main purposes:

1. A web interface for displaying Hadoop's internal state

2. Taking part in running and managing the Hadoop cluster
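Each daemon serves its web interface on its own HTTP info port. As a quick reference, the following sketch lists the stock Hadoop 1.x defaults (taken from the shipped default configuration files; all of them are overridable via the `*.http.address` settings, so verify against your own configs):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DefaultInfoPorts {
    // Default HTTP info ports in Hadoop 1.x (hdfs-default.xml / mapred-default.xml);
    // overridable via dfs.http.address, mapred.job.tracker.http.address, etc.
    public static Map<String, Integer> defaults() {
        Map<String, Integer> ports = new LinkedHashMap<>();
        ports.put("namenode", 50070);
        ports.put("datanode", 50075);
        ports.put("secondarynamenode", 50090);
        ports.put("jobtracker", 50030);
        ports.put("tasktracker", 50060);
        return ports;
    }

    public static void main(String[] args) {
        defaults().forEach((role, port) -> System.out.println(role + " -> " + port));
    }
}
```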


Take the Namenode as an example. The Namenode starts its HttpServer (Jetty) via

    startHttpServer(conf);

The relevant code is as follows:

    httpServer = new HttpServer("hdfs", infoHost, infoPort,
        infoPort == 0, conf,
        SecurityUtil.getAdminAcls(conf, DFSConfigKeys.DFS_ADMIN));

Digging into HttpServer, we can see that the code above registers the hdfs directory under HADOOP_HOME\webapps as Jetty's default context (the datanode uses the datanode directory, the jobtracker uses job, the tasktracker uses task, and the secondarynamenode uses secondary):

    webAppContext = new WebAppContext();
    webAppContext.setDisplayName("WepAppsContext");
    webAppContext.setContextPath("/");
    webAppContext.setWar(appDir + "/" + name);
    webAppContext.getServletContext().setAttribute(CONF_CONTEXT_ATTRIBUTE, conf);
    webAppContext.getServletContext().setAttribute(ADMINS_ACL, adminsAcl);
    webServer.addHandler(webAppContext);
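Hadoop wires this up with Jetty's WebAppContext, but the underlying pattern (bind a server, register handlers under context paths, then start it) can be sketched with the JDK's built-in com.sun.net.httpserver. This is purely an illustration of the pattern, not Hadoop's actual code:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class MiniWebServer {
    public static HttpServer start() throws IOException {
        // Port 0 = ephemeral, mirroring Hadoop's "infoPort == 0" case
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Register handlers under context paths, analogous to Jetty contexts
        server.createContext("/", exchange -> respond(exchange, "root context"));
        server.createContext("/static", exchange -> respond(exchange, "static resources"));
        server.start();
        return server;
    }

    private static void respond(HttpExchange ex, String body) throws IOException {
        byte[] bytes = body.getBytes("UTF-8");
        ex.sendResponseHeaders(200, bytes.length);
        try (OutputStream os = ex.getResponseBody()) {
            os.write(bytes);
        }
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start();
        System.out.println("listening on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```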

It also adds access to the logs and to the static resources (CSS, JS, images) under webapps:

    protected void addDefaultApps(ContextHandlerCollection parent,
        final String appDir) throws IOException {
      // set up the context for "/logs/" if "hadoop.log.dir" property is defined.
      String logDir = System.getProperty("hadoop.log.dir");
      if (logDir != null) {
        Context logContext = new Context(parent, "/logs");
        logContext.setResourceBase(logDir);
        logContext.addServlet(AdminAuthorizedServlet.class, "/");
        logContext.setDisplayName("logs");
        setContextAttributes(logContext);
        defaultContexts.put(logContext, true);
      }
      // set up the context for "/static/*"
      Context staticContext = new Context(parent, "/static");
      staticContext.setResourceBase(appDir + "/static");
      staticContext.addServlet(DefaultServlet.class, "/*");
      staticContext.setDisplayName("static");
      setContextAttributes(staticContext);
      defaultContexts.put(staticContext, true);
    }

as well as access to various pieces of status information:

    /**
     * Add default servlets.
     */
    protected void addDefaultServlets() {
      // set up default servlets
      addServlet("stacks", "/stacks", StackServlet.class);
      addServlet("logLevel", "/logLevel", LogLevel.Servlet.class);
      addServlet("metrics", "/metrics", MetricsServlet.class);
      addServlet("conf", "/conf", ConfServlet.class);
      addServlet("jmx", "/jmx", JMXJsonServlet.class);
    }
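To get a sense of what the /stacks endpoint produces: the heart of StackServlet is a dump of every live thread's stack trace, which can be reproduced with plain JDK calls. This is a simplified sketch, not Hadoop's exact formatting:

```java
import java.util.Map;

public class StackDump {
    // Dump every live thread's stack, roughly what the /stacks servlet returns
    public static String dumpStacks() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                 : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            sb.append("Thread ").append(t.getId())
              .append(" (").append(t.getName()).append("):\n");
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(dumpStacks());
    }
}
```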

Finally, back in the Namenode, a number of Namenode-specific endpoints are added, for example:

/fsck, used for file system checking

/getimage, the entry point through which the SecondaryNamenode fetches the image


    httpServer.addInternalServlet("getDelegationToken",
                                  GetDelegationTokenServlet.PATH_SPEC,
                                  GetDelegationTokenServlet.class, true);
    httpServer.addInternalServlet("renewDelegationToken",
                                  RenewDelegationTokenServlet.PATH_SPEC,
                                  RenewDelegationTokenServlet.class, true);
    httpServer.addInternalServlet("cancelDelegationToken",
                                  CancelDelegationTokenServlet.PATH_SPEC,
                                  CancelDelegationTokenServlet.class, true);
    httpServer.addInternalServlet("fsck", "/fsck", FsckServlet.class, true);
    httpServer.addInternalServlet("getimage", "/getimage",
        GetImageServlet.class, true);
    httpServer.addInternalServlet("listPaths", "/listPaths/*",
        ListPathsServlet.class, false);
    httpServer.addInternalServlet("data", "/data/*",
        FileDataServlet.class, false);
    httpServer.addInternalServlet("checksum", "/fileChecksum/*",
        FileChecksumServlets.RedirectServlet.class, false);
    httpServer.addInternalServlet("contentSummary", "/contentSummary/*",
        ContentSummaryServlet.class, false);
    httpServer.start();

    // The web-server port can be ephemeral... ensure we have the correct info
    infoPort = httpServer.getPort();
    httpAddress = new InetSocketAddress(infoHost, infoPort);
    conf.set("dfs.http.address", infoHost + ":" + infoPort);
    LOG.info("Web-server up at: " + infoHost + ":" + infoPort);
    return httpServer;
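The comment about the ephemeral port is worth a note: when infoPort is 0, the OS picks a free port at bind time, so the code has to read the actual port back after start() and write it into the configuration. The same idea with a plain JDK socket, as an illustration only:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class EphemeralPort {
    public static int bindEphemeral() throws IOException {
        // Port 0 asks the OS to pick any free port
        try (ServerSocket socket = new ServerSocket(0)) {
            // Read back the port that was actually assigned
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("bound to port " + bindEphemeral());
    }
}
```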


Opening the hdfs directory, you will find that the index page simply redirects to dfshealth.jsp. Looking at web.xml:

    <servlet-mapping>
        <servlet-name>org.apache.hadoop.hdfs.server.namenode.dfshealth_jsp</servlet-name>
        <url-pattern>/dfshealth.jsp</url-pattern>
    </servlet-mapping>

dfshealth_jsp.class can be found inside hadoop-core-xxx.jar.


Apart from exceptions such as the following, everything else runs more or less normally:

    java.lang.ClassNotFoundException: org.apache.hadoop.hdfs.server.namenode.dfshealth_jsp
        at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:252)

I am not sure about version 1.0.1; what I describe here is the Cloudera CDH3u2 release.
dfshealth_jsp.java is generated by the Ant build script, so you need to run ant first,
and then add the Java files under src in the build directory to your classpath as well.
