ADSL Bandwidth Management HOWTO

   Author: Dan Singletary <dvsing@sonicspike.net>
   Translator: 陈敏剑 <expns@yahoo.com>
 _________________________________________________________________

   This document describes how to configure a Linux router to manage
   bandwidth effectively on an ADSL line or any other device with
   similar characteristics (cable modem, ISDN, etc.).
 _________________________________________________________________

   1. Introduction

        * 1.1 New Versions of This Document
        * 1.2 Email List
        * 1.3 Disclaimer
        * 1.4 Copyright and License
        * 1.5 Feedback and Corrections

   2. Background

        * 2.1 Prerequisites
        * 2.2 Layout
        * 2.3 Packet Queues

   3. How It Works

        * 3.1 Throttling Outbound Traffic with Linux HTB
        * 3.2 Priority Queuing with HTB
        * 3.3 Classifying Outbound Traffic with iptables
        * 3.4 A Few More Tweaks
        * 3.5 Attempting to Throttle Inbound Traffic
        * 3.6 Why Inbound Traffic Limiting Isn't So Great

   4. Implementation

        * 4.1 Caveats
        * 4.2 Script: myshaper

   5. Testing
   6. OK It Works!! Now What?
 _________________________________________________________________

1. Introduction

   The purpose of this document is to suggest a working method for
   managing the outbound traffic of an ADSL (or cable modem)
   connection.

1.1 New Versions of This Document

   The latest version of this document can be found at
   [1]http://www.tldp.org.

1.2 Email List

   For questions and updates about ADSL bandwidth management, please
   subscribe to the email list at [2]jared.sonicspike.net.

1.3 Disclaimer

   Neither the author, the distributors, nor any other contributor to
   this HOWTO accept any responsibility if the methods it describes
   damage your equipment or cause any other loss.

1.4 Copyright and License

   The copyright to this HOWTO is held by Dan Singletary:

   This document is copyright 2002 by Dan Singletary, and is released
   under the terms of the GNU Free Documentation License, which is
   hereby incorporated by reference.

1.5 Feedback and Corrections

   If you have questions or comments about this document, please feel
   free to email the author at dvsing@sonicspike.net.

2. Background

2.1 Prerequisites

   Note: although these methods have not been tested on other
   distributions, they should work there without much trouble. The
   setup they were developed on is:

     * Red Hat Linux 7.3
     * Kernel 2.4.18-5 with full QoS support (modules are fine) and
       the following patches (which may eventually be merged into the
       mainline kernel):
          + HTB queue - [3]http://luxik.cdi.cz/~devik/qos/htb/
            Note: the stock kernels of Mandrake (8.1, 8.2) have
            included the HTB patch since 2.4.18-3.
          + IMQ device - [4]http://luxik.cdi.cz/~patrick/imq/
     * iptables version 1.2.6a or newer (the version of iptables
       distributed with Red Hat 7.3 is missing the length module)

   Note: Previous versions of this document specified a method of
   bandwidth control that involved patching the existing sch_prio
   queue. It was found later that this patch was entirely unnecessary.
   Regardless, the newer methods outlined in this document will give
   you better results (although at the writing of this document 2
   kernel patches are now necessary. :) Happy patching.)

2.2 Layout

   To keep things simple, all of the configuration below assumes the
   following network layout:
 ______________________________________________________________

                <-- 128kbit/s   --------------   <-- 10Mbit -->
 Internet <------------------- | ADSL Modem | <-------------------
                1.5Mbit/s -->   --------------          |
                                                        | eth0
                                                        V
                                                -----------------
                                               |                 |
                                               |  Linux Router   |
                                               |                 |
                                                -----------------
                                                  | .. | eth1..ethN
                                                  |    |
                                                  V    V
                                               Local Network
 ______________________________________________________________

2.3 Packet Queues

   A packet queue is a container that holds packets while they wait to
   be sent by a network device that cannot send them immediately.
   Unless configured otherwise, a packet queue is FIFO (first in,
   first out): the packet that has waited longest is sent first.

   The Upstream

   ADSL bandwidth is asymmetric: 1.5Mbit/s downstream and 128kbit/s
   upstream. The Linux router talks to the ADSL modem at about
   10Mbit/s. If the router also talks to the local network at
   10Mbit/s, no queue forms between the router and the local network.
   But packets arriving at the ADSL modem at 10Mbit/s can only leave
   for the Internet at 128kbit/s, so a queue builds up inside the
   modem, and once the modem can no longer cope it starts dropping
   packets. TCP is designed to handle exactly this situation: it
   adjusts its transmission window so that the available bandwidth is
   used as effectively as possible. TCP thus works with packet queues
   to exploit the bandwidth, but a large FIFO queue lengthens the time
   packets take to get through the link.

   A variant of the FIFO queue is the n-band priority queue. Instead
   of a single queue, it sorts packets by class into several FIFO
   queues, each with its own priority, and always dequeues from the
   highest-priority band that has packets waiting. With this scheme,
   when an FTP upload and a telnet session send packets at the same
   time, the telnet packets get the higher priority, and a lone telnet
   packet is sent immediately.

   Linux also has a newer type of queue, the Hierarchical Token Bucket
   (HTB). It is much like the n-band priority queue, but with two
   further abilities: each class can also be rate-limited, and new
   classes can be created beneath an existing class. For more
   information, see [5]http://www.lartc.org/.
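   To make the hierarchy concrete, here is a minimal sketch of an HTB
   setup. The interface name and rates are illustrative only, and it
   assumes an HTB-patched kernel and a matching tc utility: a parent
   class caps the link at 128kbit/s, and two child classes are each
   guaranteed half of that, may borrow up to the full rate, and are
   dequeued in priority order. The script in section 4.2 builds a
   larger version of this same structure.
 ______________________________________________________________
# Minimal HTB sketch (example interface and rates only - see the
# full script in section 4.2). Requires an HTB-capable kernel.

# Root qdisc; unclassified traffic falls into class 1:11.
tc qdisc add dev eth0 root handle 1: htb default 11

# Parent class caps everything at 128kbit.
tc class add dev eth0 parent 1: classid 1:1 htb rate 128kbit

# Two children: each guaranteed 64kbit, each may borrow up to the
# full 128kbit. prio 0 is dequeued before prio 1 when both have
# packets waiting.
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 64kbit ceil 128kbit prio 0
tc class add dev eth0 parent 1:1 classid 1:11 htb rate 64kbit ceil 128kbit prio 1
 ______________________________________________________________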
   The Downstream

   Packets coming from the Internet toward your ADSL modem queue in
   much the same way as outbound packets, except that the queue sits
   at your ISP. Because of that, you probably have no direct control
   over how packets are queued there or which traffic gets priority.
   The only way to keep latency down at that queue is to arrange for
   senders not to send you data too fast. Unfortunately you cannot
   control the arrival rate of packets directly, but here are some
   ways a sender can be slowed down:

     * Intentionally drop inbound packets. TCP is designed to take
       full advantage of the available bandwidth while also avoiding
       congestion of the link. This means that during a bulk data
       transfer TCP will send more and more data until eventually a
       packet is dropped. TCP detects this and reduces its
       transmission window. This cycle continues throughout the
       transfer and assures data is moved as quickly as possible.
     * Manipulate the advertised receive window. During a TCP
       transfer, the receiver sends back a continuous stream of
       acknowledgment (ACK) packets. Included in the ACK packets is a
       window size advertisement which states the maximum amount of
       unacknowledged data the sender may have in flight. By
       manipulating the window size of outbound ACK packets we can
       intentionally slow down the sender. At the moment there is no
       (free) implementation of this type of flow-control on Linux
       (however I may be working on one!).

3. How It Works

   Optimizing the upstream bandwidth takes several steps. The first is
   to throttle the link from the Linux router to the ADSL modem to
   just below the rate at which the modem can send to the Internet, so
   that the packet queue forms on the Linux router rather than in the
   modem. The second is to set up priority queuing on the router; we
   will give priority to interactive traffic such as telnet,
   multiplayer games, and other interactive applications. By using HTB
   for the queuing we get rate limiting and priority queuing at the
   same time, without the priority classes starving one another.
   Third, the firewall is configured to mark packets with fwmark so
   they can be sorted into classes.

3.1 Throttling Outbound Traffic with Linux HTB

   We use HTB to throttle the rate at which packets are sent to the
   ADSL modem. To keep latency low, we must ensure that not even a
   single packet ever queues up in the modem itself.

   Note: previous claims in this section (originally named N-band
   priority queuing) were later found to be incorrect. It actually WAS
   possible to classify packets into the individual bands of the
   priority queue by only using the fwmark field; however, this was
   poorly documented at the writing of version 0.1 of this document.

3.2 Priority Queuing with HTB

   So far we have not actually improved anything: we have merely moved
   the queue from the ADSL modem onto the Linux router. A default
   100-packet queue on the router would hurt us just as badly, but
   only for the moment, because now the queue is somewhere we can
   control it.

   Each class in an HTB hierarchy can be assigned a priority, and
   different types of traffic can be placed in different classes.
   Since we can also guarantee each class a minimum rate, we gain
   control over the order in which packets are dequeued and sent. HTB
   does this well, and never lets one priority class starve another.

   Once the classes are set up, we use filters to sort traffic into
   them. There are several ways to do this, but this document uses the
   familiar iptables/ipchains approach: iptables rules will mark
   different kinds of traffic so that they land in different classes.

3.3 Classifying Outbound Traffic with iptables

   Note: originally this document used ipchains to classify packets.
   The newer iptables is now used.

   Here is a simple description of how outbound packets, which all
   start with mark 0x00, are sorted into four classes:

     * Mark all packets 0x03, the lowest-priority class.
     * Mark ICMP packets 0x00. We want ping to show the latency of the
       highest-priority class.
     * Mark all packets to destination port 25 (SMTP) as 0x03. If
       someone sends mail with a large attachment, it must not swamp
       the interactive traffic.
     * Mark all packets bound for game servers as 0x02. This gives
       games reasonably low latency but keeps them from swamping out
       the system applications that require even lower latency.
     * Mark all packets to destination ports 1024 or below as 0x01,
       giving priority to system services such as telnet and SSH. The
       FTP control port also falls in this range.
     * Mark any "small" packets as 0x02. Outbound ACK packets
       belonging to inbound downloads should be sent promptly to keep
       those downloads efficient. This is possible using the iptables
       length module.

   This scheme, of course, should be adjusted to your own needs.

3.4 A Few More Tweaks

   There are at least two more things you can do to improve latency.
   First, lower the maximum transmission unit (MTU) from the default
   of 1500 bytes. Lowering it reduces the average latency, at the cost
   of slightly lower usable throughput, because every packet carries a
   fixed 40 bytes of IP and TCP header. Second, shorten the transmit
   queue from its default length of 100 packets, which on an ADSL line
   with a 1500-byte MTU could take as long as 10 seconds to empty.

3.5 Attempting to Throttle Inbound Traffic

   By using the Intermediate Queuing Device (IMQ), we can run all
   inbound packets through a queue in the same way we queue outbound
   packets. Packet priority is much simpler in this case: all non-TCP
   traffic is marked 0x00 and TCP traffic is marked 0x01, with "small"
   TCP packets also marked 0x00. We attach a standard FIFO queue to
   class 0x00 and a Random Early Drop (RED) queue to class 0x01. RED
   starts to slow down or drop packets when things look like they are
   getting out of control (i.e. before the queue actually overflows).
   We'll also rate-limit both classes to some maximum inbound rate
   which is less than your true inbound speed over the ADSL modem.
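   As a minimal sketch of the two-class arrangement just described
   (assuming the IMQ patch provides an imq0 device; the 700kbit
   ceiling is only an example, mark 0x00 maps to class 1:20 and 0x01
   to 1:21, and the full version with its marking rules appears in
   section 4.2):
 ______________________________________________________________
# Minimal inbound sketch (assumes the IMQ patch; rates are examples).
modprobe imq numdevs=1
ip link set imq0 up

tc qdisc add dev imq0 handle 1: root htb default 21
tc class add dev imq0 parent 1: classid 1:1 htb rate 700kbit

# 1:20 - non-TCP and small TCP packets, plain FIFO, dequeued first.
tc class add dev imq0 parent 1:1 classid 1:20 htb rate 350kbit ceil 700kbit prio 0
tc qdisc add dev imq0 parent 1:20 handle 20: pfifo

# 1:21 - bulk TCP; RED starts dropping before the queue overflows.
tc class add dev imq0 parent 1:1 classid 1:21 htb rate 350kbit ceil 700kbit prio 1
tc qdisc add dev imq0 parent 1:21 handle 21: red limit 1000000 min 5000 max 100000 avpkt 1000 burst 50

# Packets still need an "iptables -t mangle ... -j IMQ" rule to
# traverse imq0 - see the full script in section 4.2.
 ______________________________________________________________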
3.6 Why Inbound Traffic Limiting Isn't So Great

   We must limit inbound traffic to keep the queue at the ISP from
   filling up, since that queue can buffer as much as 5 seconds worth
   of data. The problem is that the only tool we currently have is
   dropping packets, and those packets have already consumed some of
   the ADSL modem's bandwidth by the time we drop them. A dropped
   packet will simply be retransmitted, eating even more bandwidth.
   When we limit inbound traffic, we are really limiting the rate of
   the data passed on to the local network; the actual inbound rate on
   the line is somewhat higher, because of all the packets we drop. So
   the limit we set has to be noticeably lower than what the ADSL
   modem could actually achieve. In practice, I have to limit my
   1.5Mbit/s downstream ADSL to 700kbit/sec to cope with 5 concurrent
   downloads. The more TCP sessions you run, the more bandwidth is
   wasted on dropped packets, and the further below your limit the
   usable rate falls. A better way to throttle inbound TCP traffic
   would be to manipulate the TCP window, but that is beyond the scope
   of this document (though I know of one such effort...).

4. Implementation

4.1 Caveats

   Limiting the rate of data sent to the DSL modem is not as simple as
   it looks. Most DSL modems are really just Ethernet bridges between
   your ISP's gateway and your Linux box, and most of them use ATM as
   the link layer. ATM always sends data in cells of 53 bytes: 5 bytes
   of header and 48 bytes of payload. Even if you send a single byte
   of data, a full 53-byte cell of bandwidth is consumed. Now consider
   a TCP ACK packet: 0 bytes of data + 20 bytes TCP header + 20 bytes
   IP header + 18 bytes of Ethernet overhead. Although this Ethernet
   packet carries only 40 bytes of payload (the TCP and IP headers),
   the minimum Ethernet payload is 46 bytes, so the remaining 6 bytes
   are padding. That makes the full Ethernet frame 18 + 46 = 64 bytes.
   Under ATM's rules, sending 64 bytes means sending two ATM cells
   occupying 106 bytes of bandwidth; every TCP ACK packet therefore
   wastes 42 bytes of bandwidth.

   This would be fine if Linux accounted for the encapsulation the DSL
   modem uses, but Linux counts only the TCP header, IP header, and 14
   bytes of MAC addresses (it does not count the 4-byte CRC, since
   that is handled at the hardware level). Linux neither rounds
   Ethernet payloads up to the 46-byte minimum nor accounts for the
   fixed ATM cell size.

   All of this means that you need to set your outbound limit somewhat
   lower than the line's nominal rate, and experiment to find the
   value that best fits your own link. Because of this error in
   Linux's bandwidth accounting, it can easily happen that latency
   spikes to over 3 seconds while you download a large file.

   I have been working on a solution to this problem for a few months
   and have almost settled on a solution that I will soon release to
   the public for further testing. The solution involves using a
   user-space queue instead of Linux's QoS to rate-limit packets. I've
   basically implemented a simple HTB queue using Linux user-space
   queues. This solution (so far) has been able to regulate outbound
   traffic SO WELL that even during a massive bulk download (several
   streams) and bulk upload (gnutella, several streams) the latency
   PEAKS at 400ms over my nominal no-traffic latency of about 15ms.
   For more information on this QoS method, subscribe to the email
   list for updates or check back on updates to this HOWTO.
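   To replay the ATM cell arithmetic described above, here is a small
   hypothetical helper (the file name and default are made up, and it
   follows the document's accounting, which ignores the AAL5 trailer):
 ______________________________________________________________
#!/bin/bash
# atm-cost.sh - hypothetical helper replaying the arithmetic above.
# Usage: atm-cost.sh [ip-packet-bytes]   (default 40 = a TCP ACK)

IP=${1:-40}                       # IP header + TCP header + data
PAYLOAD=$(( IP < 46 ? 46 : IP ))  # Ethernet pads payload to 46 bytes
FRAME=$(( PAYLOAD + 18 ))         # + 14-byte header + 4-byte CRC
CELLS=$(( (FRAME + 47) / 48 ))    # 48 payload bytes per ATM cell
WIRE=$(( CELLS * 53 ))            # 53 bytes on the wire per cell
echo "$IP-byte IP packet -> $FRAME-byte frame -> $CELLS cells -> $WIRE bytes"
echo "overhead beyond the Ethernet frame: $(( WIRE - FRAME )) bytes"
# For the default 40-byte ACK: 64-byte frame, 2 cells, 106 bytes on
# the wire, 42 bytes of overhead - matching the figures in the text.
 ______________________________________________________________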
4.2 Script: myshaper

   Below is the script I use to manage traffic on my own Linux router.
   Outbound traffic is sorted into one of 7 queues according to type.
   Inbound traffic is sorted into two queues, with TCP data in the
   lower-priority one (TCP packets are dropped if the inbound rate is
   exceeded). The rates given in the script work well for my setup;
   your results may differ.

   This script is based on the ADSL WonderShaper, found at the
   [6]LARTC website.
 ______________________________________________________________
#!/bin/bash
#
# myshaper - DSL/Cable modem outbound traffic shaper and prioritizer.
#            Based on the ADSL/Cable wondershaper (www.lartc.org)
#
# Written by Dan Singletary (8/7/02)
#
# NOTE!! - This script assumes your kernel has been patched with the
#          appropriate HTB queue and IMQ patches available here:
#          (subnote: future kernels may not require patching)
#
#          http://luxik.cdi.cz/~devik/qos/htb/
#          http://luxik.cdi.cz/~patrick/imq/
#
# Configuration options for myshaper:
#  DEV    - set to ethX that connects to DSL/Cable Modem
#  RATEUP - set this to slightly lower than your
#           outbound bandwidth on the DSL/Cable Modem.
#           I have a 1500/128 DSL line and setting
#           RATEUP=90 works well for my 128kbps upstream.
#           However, your mileage may vary.
#  RATEDN - set this to slightly lower than your
#           inbound bandwidth on the DSL/Cable Modem.
#
#
# Theory on using imq to "shape" inbound traffic:
#
#    It's impossible to directly limit the rate of data that will
# be sent to you by other hosts on the internet.  In order to shape
# the inbound traffic rate, we have to rely on the congestion avoidance
# algorithms in TCP.  Because of this, WE CAN ONLY ATTEMPT TO SHAPE
# INBOUND TRAFFIC ON TCP CONNECTIONS.  This means that any traffic that
# is not tcp should be placed in the high-prio class, since dropping
# a non-tcp packet will most likely result in a retransmit which will
# do nothing but unnecessarily consume bandwidth.
#    We attempt to shape inbound TCP traffic by dropping tcp packets
# when they overflow the HTB queue which will only pass them on at
# a certain rate (RATEDN) which is slightly lower than the actual
# capability of the inbound device.  By dropping TCP packets that
# are over-rate, we are simulating the same packets getting dropped
# due to a queue-overflow on our ISP's side.  The advantage of this
# is that our ISP's queue will never fill because TCP will slow its
# transmission rate in response to the dropped packets in the assumption
# that it has filled the ISP's queue, when in reality it has not.
#    The advantage of using a priority-based queuing discipline is
# that we can specifically choose NOT to drop certain types of packets
# that we place in the higher priority buckets (ssh, telnet, etc).  This
# is because packets will always be dequeued from the lowest priority class
# with the stipulation that packets will still be dequeued from every
# class fairly at a minimum rate (in this script, each bucket will deliver
# at least its fair share of 1/7 of the bandwidth).
#
# Reiterating main points:
#  * Dropping a tcp packet on a connection will lead to a slower rate
#    of reception for that connection due to the congestion avoidance
#    algorithm.
#  * We gain nothing from dropping non-TCP packets.  In fact, if they
#    were important they would probably be retransmitted anyways so we
#    want to try to never drop these packets.  This means that saturated
#    TCP connections will not negatively affect protocols that don't have
#    a built-in retransmit like TCP.
#  * Slowing down incoming TCP connections such that the total inbound
#    rate is less than the true capability of the device (ADSL/Cable
#    Modem) SHOULD result in little to no packets being queued on the
#    ISP's side (DSLAM, cable concentrator, etc).  Since these ISP queues
#    have been observed to queue 4 seconds of data at 1500Kbps or 6
#    megabits of data, having no packets queued there will mean lower
#    latency.
#
# Caveats (questions posed before testing):
#  * Will limiting inbound traffic in this fashion result in poor bulk
#    TCP performance?
#    - Preliminary answer is no!  Seems that by prioritizing ACK packets
#      (small <64b) we maximize throughput by not wasting bandwidth on
#      retransmitted packets that we already have.
#

# NOTE: The following configuration works well for my
# setup: 1.5M/128K ADSL via Pacific Bell Internet (SBC Global Services)

DEV=eth0
RATEUP=90
RATEDN=700  # Note that this is significantly lower than the capacity of 1500.
            # Because of this, you may not want to bother limiting inbound
            # traffic until a better implementation such as TCP window
            # manipulation can be used.
#
# End Configuration Options
#

if [ "$1" = "status" ]
then
        echo "[qdisc]"
        tc -s qdisc show dev $DEV
        tc -s qdisc show dev imq0
        echo "[class]"
        tc -s class show dev $DEV
        tc -s class show dev imq0
        echo "[filter]"
        tc -s filter show dev $DEV
        tc -s filter show dev imq0
        echo "[iptables]"
        iptables -t mangle -L MYSHAPER-OUT -v -x 2> /dev/null
        iptables -t mangle -L MYSHAPER-IN -v -x 2> /dev/null
        exit
fi

# Reset everything to a known state (cleared)
tc qdisc del dev $DEV root 2> /dev/null > /dev/null
tc qdisc del dev imq0 root 2> /dev/null > /dev/null
iptables -t mangle -D POSTROUTING -o $DEV -j MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-OUT 2> /dev/null > /dev/null
iptables -t mangle -D PREROUTING -i $DEV -j MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -F MYSHAPER-IN 2> /dev/null > /dev/null
iptables -t mangle -X MYSHAPER-IN 2> /dev/null > /dev/null
ip link set imq0 down 2> /dev/null > /dev/null
rmmod imq 2> /dev/null > /dev/null

if [ "$1" = "stop" ]
then
        echo "Shaping removed on $DEV."
        exit
fi

###########################################################
#
# Outbound Shaping (limits total bandwidth to RATEUP)

# set queue size to give latency of about 2 seconds on low-prio packets
ip link set dev $DEV qlen 30

# changes mtu on the outbound device.  Lowering the mtu will result
# in lower latency but will also cause slightly lower throughput due
# to IP and TCP protocol overhead.
ip link set dev $DEV mtu 1000

# add HTB root qdisc
tc qdisc add dev $DEV root handle 1: htb default 26

# add main rate limit classes
tc class add dev $DEV parent 1: classid 1:1 htb rate ${RATEUP}kbit

# add leaf classes - We grant each class at LEAST its "fair share" of
#                    bandwidth.  This way no class will ever be starved
#                    by another class.  Each class is also permitted to
#                    consume all of the available bandwidth if no other
#                    classes are in use.
tc class add dev $DEV parent 1:1 classid 1:20 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 0
tc class add dev $DEV parent 1:1 classid 1:21 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 1
tc class add dev $DEV parent 1:1 classid 1:22 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 2
tc class add dev $DEV parent 1:1 classid 1:23 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 3
tc class add dev $DEV parent 1:1 classid 1:24 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 4
tc class add dev $DEV parent 1:1 classid 1:25 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 5
tc class add dev $DEV parent 1:1 classid 1:26 htb rate $[$RATEUP/7]kbit ceil ${RATEUP}kbit prio 6

# attach qdisc to leaf classes - here we attach SFQ to each priority class.
#                                SFQ ensures that within each class
#                                connections will be treated (almost) fairly.
tc qdisc add dev $DEV parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev $DEV parent 1:21 handle 21: sfq perturb 10
tc qdisc add dev $DEV parent 1:22 handle 22: sfq perturb 10
tc qdisc add dev $DEV parent 1:23 handle 23: sfq perturb 10
tc qdisc add dev $DEV parent 1:24 handle 24: sfq perturb 10
tc qdisc add dev $DEV parent 1:25 handle 25: sfq perturb 10
tc qdisc add dev $DEV parent 1:26 handle 26: sfq perturb 10

# filter traffic into classes by fwmark - here we direct traffic into the
#                                         priority class according to the
#                                         fwmark set on the packet (we set
#                                         fwmark with iptables later).  Note
#                                         that above we've set the default
#                                         priority class to 1:26 so unmarked
#                                         packets (or packets marked with
#                                         unfamiliar IDs) will be defaulted
#                                         to the lowest priority class.
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 22 fw flowid 1:22
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 23 fw flowid 1:23
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 24 fw flowid 1:24
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 25 fw flowid 1:25
tc filter add dev $DEV parent 1:0 prio 0 protocol ip handle 26 fw flowid 1:26

# add MYSHAPER-OUT chain to the mangle table in iptables - this sets up
#                the table we'll use to filter and mark packets.
iptables -t mangle -N MYSHAPER-OUT
iptables -t mangle -I POSTROUTING -o $DEV -j MYSHAPER-OUT

# add fwmark entries to classify different types of traffic - Set fwmark
#                from 20-26 according to desired class. 20 is highest prio.
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport 0:1024 -j MARK --set-mark 23 # Default for low port traffic
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 0:1024 -j MARK --set-mark 23 # ""
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 20 -j MARK --set-mark 26     # ftp-data port, low prio
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport 5190 -j MARK --set-mark 23   # aol instant messenger
iptables -t mangle -A MYSHAPER-OUT -p icmp -j MARK --set-mark 20               # ICMP (ping) - high prio, impress friends
iptables -t mangle -A MYSHAPER-OUT -p udp -j MARK --set-mark 21                # DNS name resolution (small packets)
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport ssh -j MARK --set-mark 22    # secure shell
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport ssh -j MARK --set-mark 22    # secure shell
iptables -t mangle -A MYSHAPER-OUT -p tcp --dport telnet -j MARK --set-mark 22 # telnet (ew...)
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport telnet -j MARK --set-mark 22 # telnet (ew...)
iptables -t mangle -A MYSHAPER-OUT -p ipv6-crypt -j MARK --set-mark 24         # IPSec - we don't know what the payload is though...
iptables -t mangle -A MYSHAPER-OUT -p tcp --sport http -j MARK --set-mark 25   # Local web server
iptables -t mangle -A MYSHAPER-OUT -p tcp -m length --length :64 -j MARK --set-mark 21 # small packets (probably just ACKs)
iptables -t mangle -A MYSHAPER-OUT -m mark --mark 0 -j MARK --set-mark 26      # redundant- mark any unmarked packets as 26 (low prio)

# Done with outbound shaping
#
####################################################

echo "Outbound shaping added to $DEV.  Rate: ${RATEUP}Kbit/sec."

# uncomment following line if you only want upstream shaping.
# exit

####################################################
#
# Inbound Shaping (limits total bandwidth to RATEDN)

# make sure imq module is loaded
modprobe imq numdevs=1
ip link set imq0 up

# add qdisc - default low-prio class 1:21
tc qdisc add dev imq0 handle 1: root htb default 21

# add main rate limit classes
tc class add dev imq0 parent 1: classid 1:1 htb rate ${RATEDN}kbit

# add leaf classes - TCP traffic in 21, non TCP traffic in 20
#
tc class add dev imq0 parent 1:1 classid 1:20 htb rate $[$RATEDN/2]kbit ceil ${RATEDN}kbit prio 0
tc class add dev imq0 parent 1:1 classid 1:21 htb rate $[$RATEDN/2]kbit ceil ${RATEDN}kbit prio 1

# attach qdisc to leaf classes - here we attach SFQ to the high-priority
#                                class and RED to the TCP class.  SFQ ensures
#                                that within the class connections will be
#                                treated (almost) fairly.
tc qdisc add dev imq0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev imq0 parent 1:21 handle 21: red limit 1000000 min 5000 max 100000 avpkt 1000 burst 50

# filter traffic into classes by fwmark - here we direct traffic into the
#                                         priority class according to the
#                                         fwmark set on the packet (we set
#                                         fwmark with iptables later).  Note
#                                         that above we've set the default
#                                         priority class to 1:21 so unmarked
#                                         packets (or packets marked with
#                                         unfamiliar IDs) will be defaulted
#                                         to the lowest priority class.
tc filter add dev imq0 parent 1:0 prio 0 protocol ip handle 20 fw flowid 1:20
tc filter add dev imq0 parent 1:0 prio 0 protocol ip handle 21 fw flowid 1:21

# add MYSHAPER-IN chain to the mangle table in iptables - this sets up
#                the table we'll use to filter and mark packets.
iptables -t mangle -N MYSHAPER-IN
iptables -t mangle -I PREROUTING -i $DEV -j MYSHAPER-IN

# add fwmark entries to classify different types of traffic - Set fwmark
#                to 20 or 21 according to desired class. 20 is highest prio.
iptables -t mangle -A MYSHAPER-IN -p ! tcp -j MARK --set-mark 20               # Set non-tcp packets to highest priority
iptables -t mangle -A MYSHAPER-IN -p tcp -m length --length :64 -j MARK --set-mark 20 # short TCP packets are probably ACKs
iptables -t mangle -A MYSHAPER-IN -p tcp --dport ssh -j MARK --set-mark 20     # secure shell
iptables -t mangle -A MYSHAPER-IN -p tcp --sport ssh -j MARK --set-mark 20     # secure shell
iptables -t mangle -A MYSHAPER-IN -p tcp --dport telnet -j MARK --set-mark 20  # telnet (ew...)
iptables -t mangle -A MYSHAPER-IN -p tcp --sport telnet -j MARK --set-mark 20  # telnet (ew...)
iptables -t mangle -A MYSHAPER-IN -m mark --mark 0 -j MARK --set-mark 21       # redundant- mark any unmarked packets as 21 (low prio)

# finally, instruct these packets to go through the imq0 we set up above
iptables -t mangle -A MYSHAPER-IN -j IMQ

# Done with inbound shaping
#
####################################################

echo "Inbound shaping added to $DEV.  Rate: ${RATEDN}Kbit/sec."
 ______________________________________________________________
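   Once the script is saved and made executable (the path below is
   just an example), it is driven by its single optional argument, as
   the status and stop branches near the top of the script show:
 ______________________________________________________________
chmod +x /usr/local/bin/myshaper   # example location
myshaper            # no argument: set up outbound and inbound shaping
myshaper status     # show qdisc, class, filter and iptables statistics
myshaper stop       # remove all shaping and marking rules
 ______________________________________________________________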
5. Testing

   The easiest test is to saturate the upstream with low-priority
   traffic. How exactly depends on how you set up your classes.
   Suppose ping and telnet traffic are in the highest-priority class
   (lowest fwmark). If you then saturate the upstream with an FTP
   upload, the ping time to your gateway (on the other side of the DSL
   line) should increase only slightly compared with the no-queue
   case: ping responses should stay under 100ms or so, depending on
   your configuration. If they rise by a second or two, something is
   wrong.

6. OK It Works!! Now What?

   Now that you've successfully started to manage your bandwidth, you
   should start thinking of ways to use it. After all, you're probably
   paying for it!

     * Use a Gnutella client and SHARE YOUR FILES without adversely
       affecting your network performance
     * Run a web server without having web page hits slow you down in
       Quake

References

   1. http://www.tldp.org/
   2. http://jared.sonicspike.net/mailman/listinfo/adsl-qos
   3. http://luxik.cdi.cz/~devik/qos/htb/
   4. http://luxik.cdi.cz/~patrick/imq/
   5. http://www.lartc.org/
   6. http://www.lartc.org/