
Apache Tomcat load-balancing experiment notes

 

 

Original post: http://www.iteye.com/topic/1017961

 

 

Background: Tomcat has limitations as a web server: its processing capacity and efficiency are low, and it handles only modest concurrency (around 1,000 connections). But plenty of sites and pages are built on JSP and use Tomcat as the web server, so tuning has to start from there.
The usual approach is Apache + mod_jk + Tomcat: users access Apache, and requests are forwarded to Tomcat only for JSP pages. Once the volume grows beyond what one Tomcat can handle, the only option is a Tomcat cluster, with Apache + mod_jk acting as the load balancer.
The mod_jk load balancer can forward JSP requests to different Tomcat servers and can also detect whether each server is alive. If resources allow, put an HA setup in front of mod_jk, because once the cluster is in place the pressure concentrates on the JK layer.
      
Simple topology diagram:
(topology diagram image not reproduced here)

 

 

 

Prerequisites

Tomcat 7: http://tomcat.apache.org/download-70.cgi

Apache HTTP Server 2.2: http://httpd.apache.org/download.cgi

Apache Tomcat Connector (mod_jk): http://archive.apache.org/dist/tomcat/tomcat-connectors/jk/binaries/win32/jk-1.2.31/

 

Related documentation:

Web server how-to:

http://tomcat.apache.org/connectors-doc/webserver_howto/apache.html

 

Installation paths:

httpd: D:\Server\Apache httpd2_2

Tomcat: D:\Server\tomcat7-1, tomcat7-2, tomcat7-3

JK: D:\Server\Apache httpd2_2\modules\mod_jk-1.2.31-httpd-2.2.3.so

 

Step 1: Add and configure JK

Append the following line to the end of D:\Server\Apache httpd2_2\conf\httpd.conf so that the JK configuration file is loaded:

 

Include conf\mod_jk.conf

 

Create a mod_jk.conf file with the following content:

 

LoadModule jk_module modules/mod_jk-1.2.31-httpd-2.2.3.so
JkWorkersFile conf/workers.properties
# Route matching requests to Tomcat; "controller" is the load-balancer worker defined in workers.properties
JkMount /*.jsp controller
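When requests are not routed as expected, it helps to have mod_jk write its own log. A hedged sketch extending the mod_jk.conf above (the log file path and level are my assumptions, not part of the original setup):

```apache
LoadModule jk_module modules/mod_jk-1.2.31-httpd-2.2.3.so
JkWorkersFile conf/workers.properties
# Where mod_jk writes its own log (path relative to ServerRoot; an assumption)
JkLogFile logs/mod_jk.log
# One of debug, info, error
JkLogLevel info
# Forward all JSP requests to the "controller" lb worker from workers.properties
JkMount /*.jsp controller
```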
 

 

Step 2: Configure the workers

Create a workers.properties file with the following content:

 

#server
worker.list = controller
#========tomcat1========
worker.tomcat1.port=11009
worker.tomcat1.host=localhost
worker.tomcat1.type=ajp13
worker.tomcat1.lbfactor = 1
#========tomcat2========
worker.tomcat2.port=12009
worker.tomcat2.host=localhost
worker.tomcat2.type=ajp13
worker.tomcat2.lbfactor = 1
#========tomcat3========
worker.tomcat3.port=13009
# 192.168.0.80 runs in my virtual machine, so it counts as a remote node
worker.tomcat3.host=192.168.0.80
worker.tomcat3.type=ajp13
worker.tomcat3.lbfactor = 1
 
#========controller, the load balancer========
worker.controller.type=lb
worker.controller.balanced_workers=tomcat1,tomcat2,tomcat3
worker.controller.sticky_session=false
worker.controller.sticky_session_force=1
#worker.controller.sticky_session=1
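The lbfactor values control each worker's share of the traffic: equal factors mean an even split, and a larger factor means proportionally more requests. As an illustration only (this sketch uses smooth weighted round-robin; it is not necessarily mod_jk's exact internal algorithm):

```java
import java.util.*;

// Illustrative sketch only (not mod_jk's actual implementation): shows how
// lbfactor-style weights translate into request shares, using smooth
// weighted round-robin. Worker names and weights mirror workers.properties.
public class LbFactorDemo {

    /** Distribute `requests` picks across workers in proportion to their weights. */
    public static List<String> distribute(Map<String, Integer> lbfactor, int requests) {
        Map<String, Integer> score = new LinkedHashMap<>();
        for (String w : lbfactor.keySet()) score.put(w, 0);
        int total = 0;
        for (int f : lbfactor.values()) total += f;

        List<String> picks = new ArrayList<>();
        for (int i = 0; i < requests; i++) {
            // Every worker earns its weight each round...
            for (Map.Entry<String, Integer> e : lbfactor.entrySet())
                score.merge(e.getKey(), e.getValue(), Integer::sum);
            // ...and the highest-scoring worker serves the request.
            String best = null;
            for (String w : lbfactor.keySet())
                if (best == null || score.get(w) > score.get(best)) best = w;
            score.merge(best, -total, Integer::sum);
            picks.add(best);
        }
        return picks;
    }

    public static void main(String[] args) {
        Map<String, Integer> workers = new LinkedHashMap<>();
        workers.put("tomcat1", 1);
        workers.put("tomcat2", 1);
        workers.put("tomcat3", 1);
        for (String w : workers.keySet()) {
            long n = distribute(workers, 6).stream().filter(w::equals).count();
            System.out.println(w + " handled " + n + " of 6 requests"); // 2 each
        }
    }
}
```

With the three equal lbfactor = 1 entries above, this degenerates to plain round-robin, which matches the alternating behavior observed later in the test.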

 

If the three Tomcats were on different machines, the port changes below could largely be skipped. But since this load-balancing setup runs on a single machine, the 8005 and 8080 ports of all three Tomcat instances must be changed so that none collide; otherwise the three instances cannot start at the same time.

 

For this test all three Tomcats run locally, so to start them simultaneously the HTTP connector port (the protocol="HTTP/1.1" connector) of each instance is changed from the default 8080 to 10080, 11080, and 12080.

 

Likewise, the shutdown port (originally 8005, the port used to stop Tomcat) is changed to 10005, 11005, and 12005.

 

tomcat7-1

 

 

    <Connector port="10080" protocol="HTTP/1.1" connectionTimeout="20000" 
               redirectPort="8443" />
 

 

tomcat7-2

 

    <Connector port="11080" protocol="HTTP/1.1" connectionTimeout="20000" 
               redirectPort="8443" />
 

 

tomcat7-3

 

    <Connector port="12080" protocol="HTTP/1.1" connectionTimeout="20000" 
               redirectPort="8443" />
 

Besides the original 8080 HTTP (protocol="HTTP/1.1") connector port in server.xml, the AJP/1.3 connector port also needs to change. In addition, uncomment the <Cluster> element (commented out by default) and set the jvmRoute attribute of the corresponding <Engine> to the matching worker name from workers.properties. The resulting configuration (comments removed):

tomcat7-1

 

<Connector port="11009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost"        
           jvmRoute="tomcat1">  
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
 

tomcat7-2

 

<Connector port="12009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost"        
           jvmRoute="tomcat2">  
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
 

 

tomcat7-3

 

<Connector port="13009" protocol="AJP/1.3" redirectPort="8443" />
    <Engine name="Catalina" defaultHost="localhost"        
           jvmRoute="tomcat3">  
    <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
 

 

OK — at this point all three Tomcat 7 instances start successfully. When tomcat7-2 starts, tomcat7-1 prints replication messages similar to:

 

2011-9-20 14:12:18 org.apache.catalina.ha.tcp.SimpleTcpCluster memberAdded
INFO: Replication member added:org.apache.catalina.tribes.membership.MemberImpl[tcp://{172, 16, 10, 96}:4001,{172, 16, 10, 96},4001, alive=1000, securePort=-1, UDP Port=-1, id={109 112 -14 -8 -44 98 79 85 -89 -48 -33 -127 -47 -30
26 -75 }, payload={}, command={}, domain={}, ]
 

 

 

 

Step 3: Build a test project
Add <distributable/> to the project's web.xml.
testlb.jsp:
<%@ page contentType="text/html; charset=GBK"%>
<%@ page import="java.util.*"%>
<html>
	<head>
		<title>Cluster App Test</title>
	</head>
	<body>
		Server Info:
		<%
		out.println(request.getLocalAddr() + " : " + request.getLocalPort() + "<br>");
		%>
		<%
			out.println("<br> ID " + session.getId() + "<br>");
			// If a new session attribute was submitted, store it
			String dataName = request.getParameter("dataName");
			if (dataName != null && dataName.length() > 0) {
				String dataValue = request.getParameter("dataValue");
				session.setAttribute(dataName, dataValue);
			}
			out.println("<b>Session attributes</b><br>");
			System.out.println("============================");
			Enumeration e = session.getAttributeNames();
			while (e.hasMoreElements()) {
				String name = (String) e.nextElement();
				String value = session.getAttribute(name).toString();
				out.println(name + " = " + value + "<br>");
				System.out.println(name + " = " + value);
			}
		%>
		<form action="testlb.jsp" method="POST">
			Name:
			<input type=text size=20 name="dataName">
			<br>
			Value:
			<input type=text size=20 name="dataValue">
			<br>
			<input type=submit>
		</form>
	</body>
</html>
 
Refreshing the page repeatedly shows the same session ID, so session replication is working. What about the data stored in the session? After entering test values in the name/value form, the result looked like the screenshot below (screenshot not reproduced here):


(The following is excerpted from the original article; I'm being lazy here.)

 

The tests above show that sessions are shared across the cluster: every node returns the same session for the same visitor, and the attributes stored in the session are replicated as well.

 

Node plug/unplug test

Plug/unplug means that when some node in the running cluster is shut down or started, the cluster should keep working and the node should rejoin normally.

The test process is described below; screenshots would take up too much space.

After shutting down Tomcat2 and refreshing the page, requests kept alternating between Tomcat1 and Tomcat3; after also shutting down Tomcat1, only Tomcat3 was accessed. So the cluster runs normally as nodes are shut down.

After restarting Tomcat2, however, no amount of refreshing ever reached it; only Tomcat3 was accessed. Could Apache really be unable to forward requests to a node started mid-run? Then I accessed the page from another machine, found Tomcat2 responding normally, and after that, refreshing the local page reached Tomcat2 again.

From this I can guess at Apache's balancing behavior: for each new session, Apache picks a node according to the lbfactor weights in the worker configuration. If some node node1 cannot be reached, it looks for the next reachable node and puts node1 on that session's access blacklist, so later requests from that session simply skip node1, even after node1 becomes reachable again. A new session starts with no blacklist, and if a new session manages to reach node1, node1 is removed from every other session's blacklist, so those sessions can reach node1 again. This is only my own guess after testing.
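The guessed behavior above — prefer a node, then fall through to the next live one when it is down — can be modeled as a toy sketch (purely illustrative; the worker names are from this setup, but the logic is my conjecture, not mod_jk source):

```java
import java.util.*;

// Toy model of the failover behavior observed above (a guess, not mod_jk's
// real algorithm): a request starts at its preferred worker and falls
// through to the next live one when that worker is down.
public class FailoverDemo {

    /** Returns the worker that serves the request, or null if all are down. */
    public static String route(List<String> workers, Set<String> down, int preferred) {
        int n = workers.size();
        for (int i = 0; i < n; i++) {
            String w = workers.get((preferred + i) % n);
            if (!down.contains(w)) return w; // first live node starting from the preferred one
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> workers = List.of("tomcat1", "tomcat2", "tomcat3");
        // Tomcat2 is shut down: its requests spill over to the next live node.
        System.out.println(route(workers, Set.of("tomcat2"), 1)); // prints tomcat3
    }
}
```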

These tests show that the Tomcat cluster and load balancing are working.

 

I still had some questions about clustering, so I ran a few more tests; here are just the conclusions:

1. The same application can be deployed under different names on different cluster nodes (though there seems to be little point), as long as the <Context> under <Host> in each server.xml uses the same path.

2. If the application names can differ, can the application contents differ too? (The idea: make several different applications look like one application through the cluster, sharing sessions, by mapping them to the same access path; paths they share would be balanced, and a path that exists in only one application would always go to that application.) But reality is harsh: the answer is no, at least not with the configuration above. A request for a path that only one application has will succeed only when the balancer happens to route it to that application; otherwise you just get a path-not-found error page.

 

 

If you have read other Apache + Tomcat cluster guides online, you may have these questions:

1. Most articles online configure a two-Tomcat cluster, some of them setting worker.controller.sticky_session=1 in workers.properties and then setting jvmRoute in tomcat1's server.xml to tomcat2 and tomcat2's jvmRoute to tomcat1. That worked when I tried it too, but with three or more Tomcats I didn't know how to assign each jvmRoute, which is why I settled on the configuration above.

2. About the <Cluster> configuration in server.xml: most online guides use the BackupManager approach, with a large block of configuration pasted inside <Cluster>. In fact, simply uncommenting <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> is enough to enable cluster session replication; the two approaches just replicate in different ways. http://tomcat.apache.org/tomcat-6.0-doc/cluster-howto.html explains this clearly: the default session replication manager is DeltaManager, which replicates all-to-all, copying the sessions of an application on one Tomcat to every node in the cluster, even nodes where that application is not deployed. That is obviously wasteful, but causes no real problem in a small cluster. BackupManager, as configured in most online guides, replicates only to the designated backup nodes, which also works, but it is not as proven as DeltaManager: "Downside of the BackupManager: not quite as battle tested as the delta manager". So configure whichever suits you; in general, unless the cluster is large, the default is fine. That is my own translation of the docs; I hope it doesn't mislead anyone.
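As a concrete illustration of the "just uncomment it" point: the bare <Cluster> element uses DeltaManager implicitly, and the same default can also be spelled out explicitly. A sketch based on the Tomcat cluster how-to (the attribute values shown are illustrative defaults, not from the original setup):

```xml
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
  <!-- DeltaManager is the default: all-to-all session replication -->
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
</Cluster>
```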

 

Finally, a fairly complete post on session synchronization using the JDBC approach:

http://www.datadisk.co.uk/html_docs/java_app/tomcat6/tomcat6_clustering.htm

 

Comments
Reply #2: cuisuqiang, 2014-08-19
smallbee wrote:
(full startup log and question quoted from reply #1 below)

The hostname and the hosts configuration file don't match: http://www.javacui.com/service/136.html
Reply #1: smallbee, 2012-08-23
INFO: Initializing Coyote HTTP/1.1 on http-10080
2012-8-23 16:09:29 org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 482 ms
2012-8-23 16:09:29 org.apache.catalina.core.StandardService start
INFO: Starting service Catalina
2012-8-23 16:09:29 org.apache.catalina.core.StandardEngine start
INFO: Starting Servlet Engine: Apache Tomcat/6.0.20
2012-8-23 16:09:29 org.apache.catalina.ha.tcp.SimpleTcpCluster start
INFO: Cluster is about to start
2012-8-23 16:09:29 org.apache.catalina.tribes.transport.ReceiverBase bind
INFO: Receiver Server Socket bound to:/99.6.150.31:4001
2012-8-23 16:09:29 org.apache.catalina.tribes.membership.McastServiceImpl setupSocket
INFO: Setting cluster mcast soTimeout to 500
2012-8-23 16:09:29 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Sleeping for 1000 milliseconds to establish cluster membership, start level:4
2012-8-23 16:09:30 org.apache.catalina.tribes.membership.McastServiceImpl waitForMembers
INFO: Done sleeping, membership established, start level:4
2012-8-23 16:09:30 org.apache.catalina.ha.tcp.SimpleTcpCluster start
SEVERE: Unable to start cluster.
org.apache.catalina.tribes.ChannelException: java.net.ConnectException: Connection refused: Datagram send failed; No faulty members identified.
at org.apache.catalina.tribes.group.ChannelCoordinator.internalStart(ChannelCoordinator.java:169)
at org.apache.catalina.tribes.group.ChannelCoordinator.start(ChannelCoordinator.java:97)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.start(ChannelInterceptorBase.java:149)
at org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor.start(MessageDispatchInterceptor.java:147)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.start(ChannelInterceptorBase.java:149)
at org.apache.catalina.tribes.group.ChannelInterceptorBase.start(ChannelInterceptorBase.java:149)
at org.apache.catalina.tribes.group.GroupChannel.start(GroupChannel.java:407)
at org.apache.catalina.ha.tcp.SimpleTcpCluster.start(SimpleTcpCluster.java:671)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1035)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:516)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
Caused by: java.net.ConnectException: Connection refused: Datagram send failed
at java.net.PlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:612)
at org.apache.catalina.tribes.membership.McastServiceImpl.send(McastServiceImpl.java:385)
at org.apache.catalina.tribes.membership.McastServiceImpl.start(McastServiceImpl.java:244)
at org.apache.catalina.tribes.membership.McastService.start(McastService.java:318)
at org.apache.catalina.tribes.group.ChannelCoordinator.internalStart(ChannelCoordinator.java:158)
... 18 more
2012-8-23 16:09:30 org.apache.catalina.startup.Catalina start
SEVERE: Catalina.start:
LifecycleException:  org.apache.catalina.tribes.ChannelException: java.net.ConnectException: Connection refused: Datagram send failed; No faulty members identified.
at org.apache.catalina.ha.tcp.SimpleTcpCluster.start(SimpleTcpCluster.java:678)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1035)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443)
at org.apache.catalina.core.StandardService.start(StandardService.java:516)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:710)
at org.apache.catalina.startup.Catalina.start(Catalina.java:583)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:413)
2012-8-23 16:09:30 org.apache.catalina.startup.Catalina start
INFO: Server startup in 1210 ms

With your configuration I get the errors above. Have you run into this?
