--- /dev/null
+{
+ "01" : { "uri" : "/images/sponsors/isp42.png",
+ "title" : "<span>Hosting</span>-Sponsor",
+ "link" : "http://www.isp42.de/" }
+}
+++ /dev/null
-<?xml version="1.0" encoding="utf-8"?>
-<rss version="2.0">
-
- <channel>
- <title>IPFire.org - News</title>
- <link>http://www.ipfire.org/</link>
- <description>Kurze Beschreibung des Feeds</description>
- <language>en</language>
- <copyright>Michael Tremer</copyright>
- <pubDate>Thu, 8 Nov 2007 00:00:00 +0200</pubDate>
- <!-- <image>
- <url>URL einer einzubindenden Grafik</url>
- <title>Bildtitel</title>
- <link>URL, mit der das Bild verknüpft ist</link>
- </image> -->
-
- <item>
- <title>IPFire 2.3 Beta 5</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-10</guid>
- <pubDate>Mon, 13 Oct 2008 19:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- <b>Dear Community!</b><br />
- This day, we released the fifth beta version of IPFire 2.3.
- <br />
- Further information and a discussion about that is to find in the forum:
- <a href="http://forum.ipfire.org/index.php/topic,788.0.html" target="_blank">Click!</a>
- <br />
- We hope that many of you will install this new version and give some feedback.
- <br />
- <br />
- Michael for the team of IPFire
- ]]>
- </description>
- </item>
-
- <item>
- <title>Presentation of our project on kbarthel.de</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-07</guid>
- <pubDate>Thu, 22 Aug 2008 10:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- <b>Dear Community!</b><br />
- Kim Barthel published on his blog a text about the project ipfire itself.
- <br />
- You may view the full (german, sorry) article on
- <a href="http://blog.kbarthel.de/?p=148" target="_blank">blog.kbarthel.de</a>!
- <br />
- <br />
- Thank you, Kim!
- <br />
- <br />
- Michael
- ]]>
- </description>
- </item>
-
- <item>
- <title>IPFire 2.3 Beta 3</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-06</guid>
- <pubDate>Thu, 20 Aug 2008 19:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- <b>Dear Community!</b><br />
- This day, we released the third beta version of IPFire 2.3.
- <br />
- Further information and a discussion about that is to find in the forum:
- <a href="http://forum.ipfire.org/index.php/topic,709.0.html" target="_blank">Click!</a>
- <br />
- We hope that many of you will install this new version and give some feedback.
- <br />
- <br />
- Michael for the team of IPFire
- ]]>
- </description>
- </item>
-
- <item>
- <title>Core Update 16</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-05</guid>
- <pubDate>Thu, 16 Aug 2008 15:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- <b>Hello everybody,</b><br />
- today we are going to release Core Update number 16, the following changes were made:
- <br />
- - Fixed Squid init script showing allready started during boot<br />
- - Fixed LineQualitiy Graph not working for some gateways not responding to ping request<br />
- - Fixed Outgoing FW Logging when using Mode 1<br />
- - Fixed Urlfilter autoupdate url has changed<br />
- - Fixed redirect wrapper not working as expected<br />
- - Fixed smaller CGI issues - for detailed informations see git<br />
- - Updated ntfs-3g to current stable
- ]]>
- </description>
- </item>
-
- <item>
- <title>Article on www.linux-luenen.de</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-04</guid>
- <pubDate>Thu, 30 Jul 2008 18:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- Today, the linux user group from "Lünen"
- released an article about ipfire.
- <a href="http://www.linux-luenen.de/?q=node/15" target="_blank">to the article</a>
- (german)
- ]]>
- </description>
- </item>
-
- <item>
- <title>Core Update 15</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-03</guid>
- <pubDate>Thu, 24 Jul 2008 15:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- Today, we release the core update number 15.
- This is an important update because it will fix
- the latest dns vulnerabilities.
- <a href="http://www.heise-online.co.uk/news/DNS-security-problem-details-released--/111145">Read this for more information.</a>
- Please install this update as soon as possible.
- ]]>
- </description>
- </item>
-
- <item>
- <title>IPFire's first rss feed</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-02</guid>
- <pubDate>Thu, 24 Jul 2008 12:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- Now, the ipfire project has got it's own rss feed.
- This feed is for you, to keep you up on latest
- security updates and releases.
- ]]>
- </description>
- </item>
-
- <item>
- <title>IPFire 2.1 is out</title>
- <link>http://www.ipfire.org/#news</link>
- <author>ms@ipfire.org (Michael Tremer)</author>
- <guid>http://www.ipfire.org/#news-01</guid>
- <pubDate>Thu, 8 Nov 2007 12:00:00 +0200</pubDate>
- <description>
- <![CDATA[
- Today, we released the final version of
- "IPFire 2.1".
- ]]>
- </description>
- </item>
-
- </channel>
-
-</rss>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<Site>
- <Config>
- <Title lang="en">Links</Title>
- <Title lang="de">Links</Title>
- </Config>
- <Paragraphs>
- <Paragraph>
- <Heading>Links</Heading>
-
- <Content lang="en"><![CDATA[
- On this page, one can find a lot of external information about the <strong>IPFire-Project</strong>.
- There are some links to our partners, friends or sponsors and of course references to articles by
- some magazines.
- ]]></Content>
- <Content lang="de"><![CDATA[
- Hier findet ihr alle relevanten Links zum Projekt IPFire. Neben Partnerseiten, Freunden und
- Sponsoren findet ihr auch Artikel über IPFire die von Benutzern oder anderen Projektseiten
- und Zeitschriften verfasst wurden.
- ]]></Content>
- </Paragraph>
- <Paragraph>
- <Heading>Friends of IPFire</Heading>
- <Content lang="en"><![CDATA[
- The following users do great jobs in this project. They support us by giving servers to us or
- they do mirror the web pages on their own server. Because IPFire is growing fast, we need much
- of such capacity.
- ]]></Content>
- <Content lang="de"><![CDATA[
- Folgende unter aufgelistete User ermöglichen dem Projekt die Nutzung ihrer Server (Mirror, Build
- und Root Server). An dieser Stelle noch mal ein großes Dankeschön für euren Beitrag. Die ständig
- steigende Benutzerzahl verlangt immer schnellere Anbindungen und Datenvolumina.
- ]]></Content>
- <Content><![CDATA[
- <table width="100%" cellspacing="20px">
- <tr>
- <td><a href="http://www.firewall-service.com" target="_blank">http://www.firewall-service.com</a></td>
- <td>Rene Zingel</td>
- </tr>
- <tr>
- <td><a href="http://www.rowie.at" target="_blank">http://www.rowie.at</a></td>
- <td>Ronald Wiesinger</td>
- </tr>
- <tr>
- <td><a href="http://www.scp-systems.ch" target="_blank">http://www.scp-systems.ch</a></td>
- <td>Peter Schaelchli</td>
- </tr>
- <tr>
- <td><a href="http://www.kbarthel.de" target="_blank">http://www.kbarthel.de</a></td>
- <td>Kim Barthel</td>
- </tr>
- <tr>
- <td><a href="http://ipfire.earl-net.com" target="_blank">http://ipfire.earl-net.com</a></td>
- <td>Jan Paul Tücking</td>
- </tr>
- <tr>
- <td>Seite im Aufbau</td>
- <td>Sebastian Winter</td>
- </tr>
- </table>
- ]]></Content>
-
- </Paragraph>
- <Paragraph>
- <Heading lang="en">IPFire in Media</Heading>
- <Heading lang="de">IPFire in den Medien (diverse Zeitschriften)</Heading>
- <Content lang="en"><![CDATA[
- Often, there are some magazines publishing articles about IPFire.
- This is a short list to online-versions of them (Mostly in German):
- ]]></Content>
- <Content lang="de"><![CDATA[
- Immer öfter berichten uns User und Teammitglieder von diversen Zeitschriften in denen IPFire
- erwähnt wird. Hier sind Einige davon:
- ]]></Content>
- <Content><![CDATA[
- <a href="http://linuxmini.blogspot.com/2007/10/ipfire-free-firewall-for-your-home-or.html">http://linuxmini.blogspot.com/2007/10/ipfire-free-firewall-for-your-home-or.html</a><br />
- <a href="http://www.pro-linux.de/news/2006/9219.html">http://www.pro-linux.de/news/2006/9219.html</a><br />
- <a href="http://www.kriptopolis.org/ipfire">http://www.kriptopolis.org/ipfire</a><br />
- <a href="http://www.pcmagazine.com.tr/dow71,17@2500.html">http://www.pcmagazine.com.tr/dow71,17@2500.html</a><br />
- <a href="http://freedommafia.net/main/index.php?option=com_content&task=view&id=103&Itemid=47">http://freedommafia.net/main/index.php?option=com_content&task=view&id=103&Itemid=47</a><br />
- <a href="http://www.lintelligence.de/news/1026">http://www.lintelligence.de/news/1026</a><br />
- <a href="http://www.techmonkey.de/2008/09/15/ipfire-der-nachste-star-am-soho-himmel/">http://www.techmonkey.de/2008/09/15/ipfire-der-nachste-star-am-soho-himmel/</a><br />
- ]]></Content>
- </Paragraph>
- <Paragraph>
- <Heading lang="en">Discussion about IPFire</Heading>
- <Heading lang="de">Boards und Foren (Diskussionen über IPFire)</Heading>
- <Content lang="en"><![CDATA[
- Users' recommendations do best! - This are links to threads in boards where users talk about IPFire:
- ]]></Content>
- <Content lang="de"><![CDATA[
- Mundpropaganda sagt man, ist die beste Werbung! Hier ein paar Boards und Foren, wo man sich
- gepflegt über IPFire unterhält und Erfahrungen austauscht.
- ]]></Content>
- <Content><![CDATA[
- <a href="http://forum.linuxcast.eu/viewtopic.php?f=13&p=438">http://forum.linuxcast.eu/viewtopic.php?f=13&p=438</a><br />
- <a href="http://forum.golem.de/read.php?26129,1364598,1364598#msg-1364598">http://forum.golem.de/read.php?26129,1364598,1364598#msg-1364598</a><br />
- <a href="http://www.ipcop-forum.de/forum/viewtopic.php?f=28&t=21055&hilit=IPFire">http://www.ipcop-forum.de/forum/viewtopic.php?f=28&t=21055&hilit=IPFire</a><br />
- <a href="http://forum.cdrinfo.pl/f102/jaki-dysk-sieciowy-78524/">http://forum.cdrinfo.pl/f102/jaki-dysk-sieciowy-78524/</a><br />
- <a href="http://forum.mini-pc-pro.de/projekt-forum/3681-epia-ipcop-router-projekt-wirft-mir-diverse-fragen-auf.html">http://forum.mini-pc-pro.de/projekt-forum/3681-epia-ipcop-router-projekt-wirft-mir-diverse-fragen-auf.html</a><br />
- <a href="http://nachtwandler.blogage.de/entries/2008/10/4/IPFire">http://nachtwandler.blogage.de/entries/2008/10/4/IPFire</a><br />
- <a href="http://zahlenzerkleinerer.de/1085/der-erste-ipfire-test.html">http://zahlenzerkleinerer.de/1085/der-erste-ipfire-test.html</a><br />
- ]]></Content>
- </Paragraph>
- <Paragraph>
- <Heading lang="en">Sites that link to here</Heading>
- <Heading lang="de">Nach IPFire verlinkende Seiten</Heading>
- <Content lang="en"><![CDATA[
- We are glad to give a list of sites that link to us. So, we would do this back again:
- ]]></Content>
- <Content lang="de"><![CDATA[
- Am meisten Freuen wir uns aber über Seitenbetreiber die einen Link auf unsere Projektseite setzen,
- um auch andere User auf uns hinzuweisen. Hier ein paar davon:
- ]]></Content>
- <Content><![CDATA[
- <a href="http://www.linux-luenen.de/?q=node/9">http://www.linux-luenen.de/?q=node/9</a><br />
- <a href="http://www.ohloh.net/projects/ipfire">http://www.ohloh.net/projects/ipfire</a><br />
- <a href="http://forum.softgil.com/weblinks.php?cat_id=1">http://forum.softgil.com/weblinks.php?cat_id=1</a><br />
- ]]></Content>
- <Content lang="en"><![CDATA[
- If there are links on site that are not known to the project, yet, please get in touch with us!
- ]]></Content>
- <Content lang="de"><![CDATA[
- Solltet ihr weitere relevante Links zum Projekt IPFire irgendwo entdecken, lasst es und bitte
- wissen (post im Forum oder IRC). Für jede Hilfe sind wir dankbar!
- ]]></Content>
- </Paragraph>
- </Paragraphs>
- <Sidebar>
- <Paragraph>
- <Heading><![CDATA[]]></Heading>
-
- <Content lang="en"><![CDATA[
- ]]></Content>
-
- <Content lang="de"><![CDATA[
- ]]></Content>
- </Paragraph>
- </Sidebar>
-</Site>
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<Site>
- <Posts>
- <Post>
- <Date>08/11/2008</Date>
- <Id>news-12</Id>
- <Heading>IPFire 2.3 Final</Heading>
- <Subtitle>Get it quick</Subtitle>
- <Content lang="en"><![CDATA[
- <b>Dear Community!</b><br />
- This day, we released the final version of IPFire 2.3.
- <br />
- <br />
- Major changes since the first version 2.1 in Oktober 2007
- <br />
- <ul>
- <li class="first">DNS-Securityupdate and many more packet updates</li>
- <li>Enhancement of the packet manager</li>
- <li>Improvement of the Quality-of-Service rules - Presets for QoS</li>
- <li>Adjustable Firewall Logging</li>
- <li>Kernel-Modules for better hardware suport</li>
- <li>Change the system statistic to „collectd"</li>
- <li>Better disk handling (S.M.A.R.T. and Standby)</li>
- <li>Status- and Serviceview in the Webinterface</li>
- <li>Proxy and Redirector now work more dynamic</li>
- </ul>
- <br />
- <br />
- In addition the 2.3 will change the following things
- <br />
- <ul>
- <li class="first">Kernel update to Linux-2.6.25.19</li>
- <li>Update of many more packets (OpenSSL, OpenSSH, Apache, Squid, Snort, collectd, ntfs-3g, Openswan, Updatexlrator, iptables, l7protocols)</li>
- <li>With severall Atheros Chips IPFire is able to work as Wireless Access Point</li>
- <li>Better support of UMTS-3G-Modems</li>
- <li>Use of tmpfs to reduce disk reads and writes</li>
- <li>Better hardware monitoring by the use of lmsensors</li>
- <li>Vnstat Traffic-Accounting replaces ipac-ng</li>
- </ul>
- <br />
- <br />
- The IPFire Team
- ]]></Content>
- <Content lang="de"><![CDATA[
- <b>Sehr geehrte Community!</b><br />
- Heute wurde die Final Version von IPFire 2.3 veröffentlicht.
- <br />
- <br />
- Wesentliche Änderungen seit der ersten Version 2.1 im Oktober 2007
- <br />
- <ul>
- <li class="first">DNS-Sicherheitsupdate und viele weitere Paket-Aktualisierungen</li>
- <li>Erweiterung des Paketmanagers</li>
- <li>Verfeinerung der Quality-of-Service Regeln - Voreinstellungsmodell für QoS</li>
- <li>Feiner einstellbares Firewall Logging</li>
- <li>Kernel-Module zur Hardwareunterstützung wurden nachgeliefert</li>
- <li>Umstellung der Systemstatistiken auf „collectd"</li>
- <li>Verbessertes Festplatten-Handling (S.M.A.R.T. und Standby)</li>
- <li>Status- und Serviceübersicht im Webinterface</li>
- <li>Proxy und Redirector arbeiten dynamischer zusammen</li>
- </ul>
- <br />
- <br />
- Mit der 2.3 wird sich zusätzlich folgendes ändern
- <br />
- <ul>
- <li class="first">Der Kernel wurde auf Linux-2.6.25.19 aktualisiert</li>
- <li>Aktualisierungen von weiteren Paketen (OpenSSL, OpenSSH, Apache, Squid, Snort, collectd, ntfs-3g, Openswan, Updatexlrator, iptables, l7protocols)</li>
- <li>Mit einer passenden WLAN-Karte kann der IPFire als Access-Point für WLAN-Clients dienen</li>
- <li>Bessere Unterstützung für UMTS-3G-Modems</li>
- <li>Verwendung von tmpfs zur Reduzierung von Schreibzugriffen</li>
- <li>Verbesserte Hardwareüberwachung durch Lmsensors</li>
- <li>Vnstat Traffic-Accounting ersetzt ipac-ng</li>
- </ul>
- <br />
- <br />
- Das IPFire-Team
- ]]></Content>
- </Post>
- </Posts>
-</Site>
+++ /dev/null
- <br class="clear" />
- </td>
- <td id="sh-rgt"></td>
- </tr>
- <tr>
- <td id="sh-bl"></td>
- <td id="sh-btn"></td>
- <td id="sh-br"></td>
- </tr>
- </table>
- </div>
- </div>
- <div id="footer" class="fixed2">
- Copyright © 2008 IPFire.org. All rights reserved. <a href="/imprint/%(lang)s">Imprint</a>
- </div>
- </body>
-</html>
+++ /dev/null
-#!/usr/bin/python
-
-import os
-import re
-import cgi
-
-# Check language...
-language = "en"
-try:
- if re.search(re.compile("^de(.*)"), os.environ["HTTP_ACCEPT_LANGUAGE"]):
- language = "de"
-except KeyError:
- pass
-
-index = cgi.FieldStorage().getfirst("file") or "index"
-
-sites = (
- ("ipfire.org", ("www.ipfire.org", None)),
- ("www.ipfire.org", (None, index + "/%s" % language)),
- ("source.ipfire.org", ("www.ipfire.org", "source/" + language)),
- ("tracker.ipfire.org", ("www.ipfire.org", "tracker/" + language)),
- ("download.ipfire.org", ("www.ipfire.org", "download/" + language)),
- ("people.ipfire.org", ("wiki.ipfire.org", language + "/people/start")),
- )
-
-print "Status: 302 Moved"
-print "Pragma: no-cache"
-
-location = ""
-
-for (servername, destination) in sites:
- if servername == os.environ["SERVER_NAME"]:
- if destination[0]:
- location = "http://%s" % destination[0]
- if destination[1]:
- location += "/%s" % destination[1]
- break
-
-print "Location: %s" % location
-print # End the header
#!/usr/bin/python
-# -*- coding: utf-8 -*-
-import os
import sys
import cgi
+import imputil
-sys.path.append(os.environ['DOCUMENT_ROOT'])
+from web import Page
-import xml.dom.minidom
+site = cgi.FieldStorage().getfirst("site") or "main"
-class Error404(Exception):
- pass
+sys.path = [ "pages",] + sys.path
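+# Look for a handler module named after the requested site (pages/ is first on
+# the path); if it cannot be imported, fall back to the generic "static" handler.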
+for page in (site, "static"):
+ try:
+ found = imputil.imp.find_module(page)
+ loaded = imputil.imp.load_module(page, found[0], found[1], found[2])
+ content = loaded.__dict__["Content"]
+ sidebar = loaded.__dict__["Sidebar"]
+ break
+ except ImportError, e:
+ pass
-class Error500(Exception):
- pass
+c = content(site)
+s = sidebar(site)
-class SItem:
- def __init__(self, xml, page, lang):
- self.xml = xml
- self.page = page
- self.lang = lang
-
- self.data = u""
-
- def write(self, s):
- self.data += s #.encode("utf-8")
-
- def read(self):
- return self.data
-
-
-class Menu(SItem):
- def __init__(self, file, page, lang):
- SItem.__init__(self, Xml(file, lang), page, lang)
- self.xml.load()
-
- self.items = XItem(self.xml.dom).childs("Item")
-
- def __call__(self):
- self.write("""<div id="menu"><ul>""")
- for item in self.items:
- uri = item.attr("uri")
- active = ""
- if self.page == uri:
- active = "class=\"active\""
-
- if not uri.startswith("http://"):
- uri = "/%s/%s" % (uri, self.lang)
-
- for name in item.childs("Name"):
- if name.attr("lang") in (self.lang, ""):
- self.write("<li><a %s href=\"%s\">%s</a></li>" % \
- (active, uri, name.text()))
- self.write("</ul></div>")
- return self.read()
-
-
-class Body(SItem):
- def __init__(self, xml, page, lang):
- SItem.__init__(self, xml, page, lang)
-
- self.paragraphs = XItem(self.xml.dom, "Paragraphs").childs("Paragraph")
-
- self.news = News("news", self.page, self.lang)
-
- def __call__(self):
- self.write("""<div id="primaryContent_2columns">
- <div id="columnA_2columns">""")
- for paragraph in self.paragraphs:
- for heading in paragraph.childs("Heading"):
- if heading.attr("lang") in (self.lang, ""):
- self.write('<h3>' + heading.text() + '</h3><a name="' + heading.text() +'"></a>')
- for content in paragraph.childs("Content"):
- if content.attr("lang") in (self.lang, ""):
- if content.attr("raw"):
- self.write(content.text())
- else:
- self.write("<p>" + content.text() + "</p>\n")
- self.write("""<br class="clear" />\n""")
-
- if self.page in ("index", "news",):
- self.write(self.news(3))
- self.write("""</div></div>""")
- return self.read()
-
-
-class News(SItem):
- def __init__(self, file, page, lang):
- SItem.__init__(self, Xml(file, lang), page, lang)
- self.xml.load()
-
- self.posts = XItem(self.xml.dom).childs("Posts")
-
- def __call__(self, limit=None):
- a = 1
- for post in self.posts:
- self.write("""<div class="post">""")
- for id in post.childs("Id"):
- self.write("""<a name="%s"></a>""" % id.text())
- for heading in post.childs("Heading"):
- if heading.attr("lang") in (self.lang, ""):
- self.write("""<h3>%s - %s</h3>""" % (post.childs("Date")[0].text(), heading.text()))
- for subtitle in post.childs("Subtitle"):
- if subtitle.attr("lang") in (self.lang, ""):
- self.write("""<ul class="post_info">
- <li class="date">%s</li></ul>""" % \
- subtitle.text())
- for content in post.childs("Content"):
- if content.attr("lang") in (self.lang, ""):
- if content.attr("raw"):
- self.write(content.text())
- else:
- self.write("<p>" + content.text() + "</p>\n")
- self.write("""</div>""")
- a += 1
- if limit and a > limit:
- break
- return self.read()
-
-
-class Sidebar(SItem):
- def __init__(self, xml, page, lang):
- SItem.__init__(self, xml, page, lang)
-
- self.paragraphs = XItem(self.xml.dom, "Sidebar").childs("Paragraph")
-
- def __call__(self):
- self.write("""<div id="secondaryContent_2columns">
- <div id="columnC_2columns">""")
- for post in self.paragraphs:
- for heading in post.childs("Heading"):
- if heading.attr("lang") in (self.lang, ""):
- self.write("<h4>" + heading.text() + "</h4>")
- for content in post.childs("Content"):
- if content.attr("lang") in (self.lang, ""):
- if content.attr("raw"):
- self.write(content.text())
- else:
- self.write("<p>" + content.text() + "</p>\n")
- self.write("""</div></div>""")
- return self.read()
-
-
-class XItem:
- def __init__(self, dom, node=None):
- self.dom = self.node = dom
- if node:
- self.node = self.dom.getElementsByTagName(node)[0]
- self.lang = lang
-
- def attr(self, name):
- return self.node.getAttribute(name).strip()
-
- def text(self):
- ret = ""
- for i in self.node.childNodes:
- ret = ret + i.data
- return ret
-
- def element(self, name):
- return XItem(self.node, name)
-
- def childs(self, name):
- ret = []
- for i in self.node.getElementsByTagName(name):
- ret.append(XItem(i))
- return ret
-
-
-class Xml:
- def __init__(self, page, lang):
- self.page = page
- self.lang = lang
-
- self.path = None
-
- self.data = None
- self.dom = None
-
- self._config = {}
-
- def load(self):
- self.path = \
- os.path.join(os.path.dirname(os.environ['SCRIPT_FILENAME']), "data/%s.xml" % self.page)
- try:
- f = open(self.path)
- self.data = f.read()
- f.close()
- self.dom = \
- xml.dom.minidom.parseString(self.data).getElementsByTagName("Site")[0]
- #except IOError:
- #self.page = "404"
- #self.load()
- # raise Error404
- except:
- #self.page = "500"
- #self.load()
- raise
-
- def config(self):
- elements = ("Title", "Columns",)
- for element in elements:
- self._config[element.lower()] = ""
-
- config = XItem(self.dom, "Config")
- for element in elements:
- for lang in config.childs(element):
- if lang.attr("lang") == self.lang:
- self._config[element.lower()] = lang.text()
- return self._config
-
-
-class Site:
- def __init__(self, page, lang="en"):
- self.code = "200 - OK"
- self.mime = "text/html"
-
- self.page = page
- self.lang = lang
- self.xml = Xml(page=page, lang=lang)
-
- self.data = u""
-
- self.menu = Menu("../menu", self.page, self.lang)
-
- self.config = { "document_name" : page,
- "lang" : self.lang,
- "menu" : self.menu() }
-
- try:
- self.xml.load()
- except Error404:
- self.code = "404 - Not found"
- #except:
- # self.code = "500 - Internal Server Error"
-
- def write(self, s):
- self.data += s #.encode("utf-8")
-
- def include(self, file):
- f = open(file)
- data = f.read()
- f.close()
- self.write(data % self.config)
-
- def prepare(self):
- for key, val in self.xml.config().items():
- self.config[key] = val
-
- self.config["title"] = "%s - %s" % \
- (os.environ["SERVER_NAME"], self.config["title"],)
-
- self.body = Body(self.xml, self.page, self.lang)
- self.sidebar = Sidebar(self.xml, self.page, self.lang)
-
- def run(self):
- # First, return the http header
- print "Status: %s" % self.code
- print "Content-Type: %s" % self.mime
- print # End header
-
- # Include the site's header
- self.include("header.inc")
-
- # Import body and side elements
- self.write(self.body())
- self.write(self.sidebar())
-
- # Include the site's footer
- self.include("footer.inc")
-
- return self.data.encode("utf-8")
-
-
-page = cgi.FieldStorage().getfirst("page")
-lang = cgi.FieldStorage().getfirst("lang")
-
-if not lang:
- lang = "en"
-
-site = Site(page=page, lang=lang)
-site.prepare()
-
-print site.run()
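+# Assemble the page from the selected content and sidebar handlers and render it.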
+p = Page(site, c, s)
+p()
--- /dev/null
+{
+ "10" : { "uri" : "/index",
+ "name" : "Home" },
+ "20" : { "uri" : "/download",
+ "name" : "Downloads" },
+ "30" : { "uri" : "http://wiki.ipfire.org/",
+ "name" : { "de" : "Wiki", "en" : "Docs" }},
+ "40" : { "uri" : "http://forum.ipfire.org/",
+ "name" : "Forum" },
+ "50" : { "uri" : "/development",
+ "name" : { "de" : "Entwicklung", "en" : "Development" }},
+ "60" : { "uri" : "/links",
+ "name" : "Links" }
+}
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?>
-<Site>
- <Item uri="index">
- <Name lang="en">Home</Name>
- <Name lang="de">Start</Name>
- </Item>
- <Item uri="download">
- <Name>Downloads</Name>
- </Item>
- <Item uri="http://wiki.ipfire.org/">
- <Name lang="en">Docs</Name>
- <Name lang="de">Wiki</Name>
- </Item>
- <Item uri="http://forum.ipfire.org/">
- <Name>Forum</Name>
- </Item>
- <Item uri="development">
- <Name lang="en">Development</Name>
- <Name lang="de">Entwicklung</Name>
- </Item>
- <Item uri="links">
- <Name lang="en">Links</Name>
- <Name lang="de">Links</Name>
- </Item>
-
-</Site>
--- /dev/null
+{
+ "1" : { "author" : "Michael Tremer",
+ "subject" : "IPFire 2.3 Final",
+ "date" : "2008-11-08",
+ "content" :
+      { "en" : "<p><strong>Dear Community!</strong></p>
+         <p>Today, we released the final version of IPFire 2.3.</p>
+         <p>Major changes since the first version 2.1 in October 2007:</p>
+         <ul>
+            <li class=\"first\">DNS security update and many more package updates</li>
+            <li>Enhancements to the package manager</li>
+            <li>Improved Quality-of-Service rules with presets for QoS</li>
+            <li>Adjustable firewall logging</li>
+            <li>Kernel modules for better hardware support</li>
+            <li>Switched the system statistics to collectd</li>
+            <li>Better disk handling (S.M.A.R.T. and standby)</li>
+            <li>Status and service overview in the web interface</li>
+            <li>Proxy and redirector now work together more dynamically</li>
+         </ul>
+         <p>In addition, release 2.3 brings the following changes:</p>
+         <ul>
+            <li class=\"first\">Kernel update to Linux-2.6.25.19</li>
+            <li>Updates of many more packages (OpenSSL, OpenSSH, Apache, Squid, Snort, collectd, ntfs-3g, Openswan, Updatexlrator, iptables, l7protocols)</li>
+            <li>With several Atheros chips, IPFire can act as a wireless access point</li>
+            <li>Better support for UMTS 3G modems</li>
+            <li>Use of tmpfs to reduce disk reads and writes</li>
+            <li>Better hardware monitoring through lm_sensors</li>
+            <li>Vnstat traffic accounting replaces ipac-ng</li>
+         </ul>
+         <p>The IPFire Team</p>",
+ "de" : "<strong>Sehr geehrte Community!</strong><br />
+ <p>Heute wurde die Final Version von IPFire 2.3 veröffentlicht.</p>
+ <p>Wesentliche Änderungen seit der ersten Version 2.1 im Oktober 2007:</p>
+ <ul>
+ <li class=\"first\">DNS-Sicherheitsupdate und viele weitere Paket-Aktualisierungen</li>
+ <li>Erweiterung des Paketmanagers</li>
+ <li>Verfeinerung der Quality-of-Service Regeln - Voreinstellungsmodell für QoS</li>
+ <li>Feiner einstellbares Firewall Logging</li>
+ <li>Kernel-Module zur Hardwareunterstützung wurden nachgeliefert</li>
+ <li>Umstellung der Systemstatistiken auf collectd</li>
+ <li>Verbessertes Festplatten-Handling (S.M.A.R.T. und Standby)</li>
+ <li>Status- und Serviceübersicht im Webinterface</li>
+ <li>Proxy und Redirector arbeiten dynamischer zusammen</li>
+ </ul>
+ <p>Mit der 2.3 wird sich zusätzlich folgendes ändern:</p>
+ <ul>
+ <li class=\"first\">Der Kernel wurde auf Linux-2.6.25.19 aktualisiert</li>
+ <li>Aktualisierungen von weiteren Paketen (OpenSSL, OpenSSH, Apache, Squid, Snort, collectd, ntfs-3g, Openswan, Updatexlrator, iptables, l7protocols)</li>
+ <li>Mit einer passenden WLAN-Karte kann der IPFire als Access-Point für WLAN-Clients dienen</li>
+ <li>Bessere Unterstützung für UMTS-3G-Modems</li>
+ <li>Verwendung von tmpfs zur Reduzierung von Schreibzugriffen</li>
+ <li>Verbesserte Hardwareüberwachung durch Lmsensors</li>
+ <li>Vnstat Traffic-Accounting ersetzt ipac-ng</li>
+ </ul>
+ <p>Das IPFire-Team</p>" }}
+}
--- /dev/null
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+from xml.dom.minidom import parseString
+
+import web
+
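+# Xml loads a page definition from pages/static/<name>.xml and provides small
+# helpers for reading element attributes and text nodes.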
+class Xml:
+ def __init__(self, file):
+ file = "%s/pages/static/%s.xml" % (os.getcwd(), file,)
+ f = open(file)
+ data = f.read()
+ f.close()
+
+ self.xml = parseString(data).getElementsByTagName("Site")[0]
+
+ def getAttribute(self, node, attr):
+ return node.getAttribute(attr).strip()
+
+ def getText(self, node):
+ ret = ""
+ for i in node.childNodes:
+ ret += i.data
+ return ret.encode("utf-8")
+
+
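+# Content renders the <Paragraphs> of a page in the requested language; a
+# paragraph marked news="1" is expanded to the latest news items instead.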
+class Content(Xml):
+ def __init__(self, file,):
+ Xml.__init__(self, file)
+
+ def __call__(self, lang="en"):
+ ret = ""
+ for paragraphs in self.xml.getElementsByTagName("Paragraphs"):
+ for paragraph in paragraphs.getElementsByTagName("Paragraph"):
+ if self.getAttribute(paragraph, "news") == "1":
+ news = web.News(int(self.getAttribute(paragraph, "count")))
+ ret += news(lang).encode("utf-8")
+ continue
+
+ # Heading
+ for heading in paragraph.getElementsByTagName("Heading"):
+ if self.getAttribute(heading, "lang") == lang or \
+ not self.getAttribute(heading, "lang"):
+ heading = self.getText(heading)
+ break
+
+ b = web.Box(heading)
+
+ # Content
+ for content in paragraph.getElementsByTagName("Content"):
+ if self.getAttribute(content, "lang") == lang or \
+ not self.getAttribute(content, "lang"):
+ if self.getAttribute(content, "raw") == "1":
+ s = self.getText(content)
+ else:
+ s = "<p>%s</p>" % self.getText(content)
+ b.w(s)
+
+ ret += b()
+ return ret
+
+
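+# Sidebar renders the <Sidebar> paragraphs; a paragraph marked banner="1" is
+# replaced by a randomly chosen sponsor banner.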
+class Sidebar(Xml):
+ def __init__(self, file):
+ Xml.__init__(self, file)
+
+ def __call__(self, lang="en"):
+ ret = ""
+ sidebar = self.xml.getElementsByTagName("Sidebar")[0]
+ for paragraph in sidebar.getElementsByTagName("Paragraph"):
+ if self.getAttribute(paragraph, "banner") == "1":
+ b = web.Banners()
+ ret += """<h4>%(title)s</h4><a href="%(link)s" target="_blank">
+ <img src="%(uri)s" /></a>""" % b.random()
+ continue
+
+ # Heading
+ for heading in paragraph.getElementsByTagName("Heading"):
+ if self.getAttribute(heading, "lang") == lang or \
+ not self.getAttribute(heading, "lang"):
+ heading = self.getText(heading)
+ break
+
+ ret += "<h4>%s</h4>" % heading
+
+ # Content
+ for content in paragraph.getElementsByTagName("Content"):
+ if self.getAttribute(content, "lang") == lang or \
+ not self.getAttribute(content, "lang"):
+ if self.getAttribute(content, "raw") == "1":
+ s = self.getText(content)
+ else:
+ s = "<p>%s</p>" % self.getText(content)
+ ret += s
+
+ return ret
]]></Content>
</Paragraph>
+ <Paragraph banner="1" />
+
</Sidebar>
</Site>
<li>Voice-over-IP solution with Asterisk and Teamspeak plus traffic prioritization</li>
<li>Multimedia addons (video- & audio-streaming, jukebox)</li>
<li>WLan Access Point with hostap and many Atheros Chips</li>
- <li>and many more - <a href="http://wiki.ipfire.org/Addons" target="_blank">List of all Addons</a></li>
+ <li>and many more - <a href="http://wiki.ipfire.org/en/addons/start" target="_blank">List of all Addons</a></li>
</ul>
]]></Content>
<Content lang="de" raw="1"><![CDATA[
<li>Voice-over-IP-Lösung mittels Asterisk und Teamspeak, sowie Traffic-Priorisierung</li>
<li>Multimedia-Addons (Video- & Audio-Streaming, Jukebox)</li>
<li>WLan Access Point mittels hostap für viele Atheros Chips</li>
- <li>und weiteren Addons - <a href="http://wiki.ipfire.org/Addons" target="_blank">Alle Addons im Wiki</a></li>
+ <li>und weiteren Addons - <a href="http://wiki.ipfire.org/de/addons/start" target="_blank">Alle Addons im Wiki</a></li>
</ul>
- ]]></Content>
-
+ ]]></Content>
</Paragraph>
+
+ <Paragraph news="1" count="3" />
</Paragraphs>
<Sidebar>
</form>
]]></Content>
</Paragraph>
+
+ <Paragraph banner="1" />
+ <!--
<Paragraph>
<Heading lang="en"><![CDATA[About</span> us]]></Heading>
<Heading lang="de"><![CDATA[<span>Über</span> uns]]></Heading>
aus einem Guss zu erstellen.
]]></Content>
</Paragraph>
+ -->
<Paragraph>
<Heading><![CDATA[<span>RSS</span> feed]]></Heading>
--- /dev/null
+#!/usr/bin/python
+
+TRACKER_URL ="http://tracker.ipfire.org:6969/stats?format=txt&mode=tpbs"
+TORRENT_BASE="/srv/pakfire/data/torrent"
+
+import os
+import sha
+import urllib2
+
+from client.bencode import bencode, bdecode
+import web
+
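+# TrackerInfo downloads the tracker statistics and maps each info hash to its
+# current (seeds, peers) tuple.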
+class TrackerInfo:
+ def __init__(self, url):
+ self.info = {}
+
+ f = urllib2.urlopen(url)
+ for line in f.readlines():
+ (hash, seeds, peers,) = line.split(":")
+ self.info[hash] = (seeds, peers.rstrip("\n"),)
+ f.close()
+
+ def __call__(self):
+ print self.info
+
+ def get(self, hash):
+ try:
+ return self.info[hash]
+ except KeyError:
+ return 0, 0
+
+
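+# TorrentObject reads a .torrent file and exposes its name and info hash.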
+class TorrentObject:
+ def __init__(self, file):
+ self.name = os.path.basename(file)
+ f = open(file, "rb")
+ self.info = bdecode(f.read())
+ f.close()
+
+ def __call__(self):
+ print "File : %s" % self.get_file()
+ print "Hash : %s" % self.get_hash()
+
+ def get_hash(self):
+ return sha.sha(bencode(self.info["info"])).hexdigest().upper()
+
+ def get_file(self):
+ return self.name
+
+
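+# Collect all .torrent files from the download directory.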
+torrent_files = []
+for file in os.listdir(TORRENT_BASE):
+ if not file.endswith(".torrent"):
+ continue
+ file = os.path.join(TORRENT_BASE, file)
+ torrent_files.insert(0, TorrentObject(file))
+
+
+tracker = TrackerInfo(TRACKER_URL)
+
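+# TorrentBox shows a single torrent with its current seeder/leecher counts and
+# a download link.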
+class TorrentBox(web.Box):
+ def __init__(self, file):
+ web.Box.__init__(self, file.name, file.get_hash())
+ self.w("""
+ <p>
+ <strong>Seeders:</strong> %s<br />
+ <strong>Leechers:</strong> %s
+ </p>""" % tracker.get(file.get_hash()))
+ self.w("""
+ <p style="text-align: right;">
+ <a href="http://download.ipfire.org/torrent/%s">Download</a>
+ </p>""" % (file.name,))
+
+
+class Content(web.Content):
+ def __init__(self, name):
+ web.Content.__init__(self, name)
+
+ def content(self):
+ self.w("<h3>IPFire Torrent Tracker</h3>")
+ for t in torrent_files:
+ b = TorrentBox(t)
+ self.w(b())
+
+Sidebar = web.Sidebar
--- /dev/null
+#written by John Hoffman
+
+from inifile import ini_write, ini_read
+from bencode import bencode, bdecode
+from types import IntType, LongType, StringType, FloatType
+from CreateIcons import GetIcons, CreateIcon
+from parseargs import defaultargs
+from __init__ import product_name, version_short
+import sys,os
+from time import time, strftime
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+try:
+ realpath = os.path.realpath
+except:
+ realpath = lambda x:x
+OLDICONPATH = os.path.abspath(os.path.dirname(realpath(sys.argv[0])))
+
+DIRNAME = '.'+product_name
+
+hexchars = '0123456789abcdef'
+hexmap = []
+revmap = {}
+for i in xrange(256):
+ x = hexchars[(i&0xF0)/16]+hexchars[i&0x0F]
+ hexmap.append(x)
+ revmap[x] = chr(i)
+
+def tohex(s):
+ r = []
+ for c in s:
+ r.append(hexmap[ord(c)])
+ return ''.join(r)
+
+def unhex(s):
+ r = [ revmap[s[x:x+2]] for x in xrange(0, len(s), 2) ]
+ return ''.join(r)
+
+def copyfile(oldpath, newpath): # simple file copy, all in RAM
+ try:
+ f = open(oldpath,'rb')
+ r = f.read()
+ success = True
+ except:
+ success = False
+ try:
+ f.close()
+ except:
+ pass
+ if not success:
+ return False
+ try:
+ f = open(newpath,'wb')
+ f.write(r)
+ except:
+ success = False
+ try:
+ f.close()
+ except:
+ pass
+ return success
+
+
+class ConfigDir:
+
+ ###### INITIALIZATION TASKS ######
+
+ def __init__(self, config_type = None):
+ self.config_type = config_type
+ if config_type:
+ config_ext = '.'+config_type
+ else:
+ config_ext = ''
+
+ def check_sysvars(x):
+ y = os.path.expandvars(x)
+ if y != x and os.path.isdir(y):
+ return y
+ return None
+
+ for d in ['${APPDATA}', '${HOME}', '${HOMEPATH}', '${USERPROFILE}']:
+ dir_root = check_sysvars(d)
+ if dir_root:
+ break
+ else:
+ dir_root = os.path.expanduser('~')
+ if not os.path.isdir(dir_root):
+ dir_root = os.path.abspath(os.path.dirname(sys.argv[0]))
+
+ dir_root = os.path.join(dir_root,DIRNAME)
+ self.dir_root = dir_root
+
+ if not os.path.isdir(self.dir_root):
+ os.mkdir(self.dir_root,0700) # exception if failed
+
+ self.dir_icons = os.path.join(dir_root,'icons')
+ if not os.path.isdir(self.dir_icons):
+ os.mkdir(self.dir_icons)
+ for icon in GetIcons():
+ i = os.path.join(self.dir_icons,icon)
+ if not os.path.exists(i):
+ if not copyfile(os.path.join(OLDICONPATH,icon),i):
+ CreateIcon(icon,self.dir_icons)
+
+ self.dir_torrentcache = os.path.join(dir_root,'torrentcache')
+ if not os.path.isdir(self.dir_torrentcache):
+ os.mkdir(self.dir_torrentcache)
+
+ self.dir_datacache = os.path.join(dir_root,'datacache')
+ if not os.path.isdir(self.dir_datacache):
+ os.mkdir(self.dir_datacache)
+
+ self.dir_piececache = os.path.join(dir_root,'piececache')
+ if not os.path.isdir(self.dir_piececache):
+ os.mkdir(self.dir_piececache)
+
+ self.configfile = os.path.join(dir_root,'config'+config_ext+'.ini')
+ self.statefile = os.path.join(dir_root,'state'+config_ext)
+
+ self.TorrentDataBuffer = {}
+
+
+ ###### CONFIG HANDLING ######
+
+ def setDefaults(self, defaults, ignore=[]):
+ self.config = defaultargs(defaults)
+ for k in ignore:
+ if self.config.has_key(k):
+ del self.config[k]
+
+ def checkConfig(self):
+ return os.path.exists(self.configfile)
+
+ def loadConfig(self):
+ try:
+ r = ini_read(self.configfile)['']
+ except:
+ return self.config
+ l = self.config.keys()
+ for k,v in r.items():
+ if self.config.has_key(k):
+ t = type(self.config[k])
+ try:
+ if t == StringType:
+ self.config[k] = v
+ elif t == IntType or t == LongType:
+ self.config[k] = long(v)
+ elif t == FloatType:
+ self.config[k] = float(v)
+ l.remove(k)
+ except:
+ pass
+ if l: # new default values since last save
+ self.saveConfig()
+ return self.config
+
+ def saveConfig(self, new_config = None):
+ if new_config:
+ for k,v in new_config.items():
+ if self.config.has_key(k):
+ self.config[k] = v
+ try:
+ ini_write( self.configfile, self.config,
+ 'Generated by '+product_name+'/'+version_short+'\n'
+ + strftime('%x %X') )
+ return True
+ except:
+ return False
+
+ def getConfig(self):
+ return self.config
+
+
+ ###### STATE HANDLING ######
+
+ def getState(self):
+ try:
+ f = open(self.statefile,'rb')
+ r = f.read()
+ except:
+ r = None
+ try:
+ f.close()
+ except:
+ pass
+ try:
+ r = bdecode(r)
+ except:
+ r = None
+ return r
+
+ def saveState(self, state):
+ try:
+ f = open(self.statefile,'wb')
+ f.write(bencode(state))
+ success = True
+ except:
+ success = False
+ try:
+ f.close()
+ except:
+ pass
+ return success
+
+
+ ###### TORRENT HANDLING ######
+
+ def getTorrents(self):
+ d = {}
+ for f in os.listdir(self.dir_torrentcache):
+ f = os.path.basename(f)
+ try:
+ f, garbage = f.split('.')
+ except:
+ pass
+ d[unhex(f)] = 1
+ return d.keys()
+
+ def getTorrentVariations(self, t):
+ t = tohex(t)
+ d = []
+ for f in os.listdir(self.dir_torrentcache):
+ f = os.path.basename(f)
+ if f[:len(t)] == t:
+ try:
+ garbage, ver = f.split('.')
+ except:
+ ver = '0'
+ d.append(int(ver))
+ d.sort()
+ return d
+
+ def getTorrent(self, t, v = -1):
+ t = tohex(t)
+ if v == -1:
+ v = max(self.getTorrentVariations(t)) # potential exception
+ if v:
+ t += '.'+str(v)
+ try:
+ f = open(os.path.join(self.dir_torrentcache,t),'rb')
+ r = bdecode(f.read())
+ except:
+ r = None
+ try:
+ f.close()
+ except:
+ pass
+ return r
+
+ def writeTorrent(self, data, t, v = -1):
+ t = tohex(t)
+ if v == -1:
+ try:
+ v = max(self.getTorrentVariations(t))+1
+ except:
+ v = 0
+ if v:
+ t += '.'+str(v)
+ try:
+ f = open(os.path.join(self.dir_torrentcache,t),'wb')
+ f.write(bencode(data))
+ except:
+ v = None
+ try:
+ f.close()
+ except:
+ pass
+ return v
+
+
+ ###### TORRENT DATA HANDLING ######
+
+ def getTorrentData(self, t):
+ if self.TorrentDataBuffer.has_key(t):
+ return self.TorrentDataBuffer[t]
+ t = os.path.join(self.dir_datacache,tohex(t))
+ if not os.path.exists(t):
+ return None
+ try:
+ f = open(t,'rb')
+ r = bdecode(f.read())
+ except:
+ r = None
+ try:
+ f.close()
+ except:
+ pass
+ self.TorrentDataBuffer[t] = r
+ return r
+
+ def writeTorrentData(self, t, data):
+ self.TorrentDataBuffer[t] = data
+ try:
+ f = open(os.path.join(self.dir_datacache,tohex(t)),'wb')
+ f.write(bencode(data))
+ success = True
+ except:
+ success = False
+ try:
+ f.close()
+ except:
+ pass
+ if not success:
+ self.deleteTorrentData(t)
+ return success
+
+ def deleteTorrentData(self, t):
+ try:
+ os.remove(os.path.join(self.dir_datacache,tohex(t)))
+ except:
+ pass
+
+ def getPieceDir(self, t):
+ return os.path.join(self.dir_piececache,tohex(t))
+
+
+ ###### EXPIRATION HANDLING ######
+
+ def deleteOldCacheData(self, days, still_active = [], delete_torrents = False):
+ if not days:
+ return
+ exptime = time() - (days*24*3600)
+ names = {}
+ times = {}
+
+ for f in os.listdir(self.dir_torrentcache):
+ p = os.path.join(self.dir_torrentcache,f)
+ f = os.path.basename(f)
+ try:
+ f, garbage = f.split('.')
+ except:
+ pass
+ try:
+ f = unhex(f)
+ assert len(f) == 20
+ except:
+ continue
+ if delete_torrents:
+ names.setdefault(f,[]).append(p)
+ try:
+ t = os.path.getmtime(p)
+ except:
+ t = time()
+ times.setdefault(f,[]).append(t)
+
+ for f in os.listdir(self.dir_datacache):
+ p = os.path.join(self.dir_datacache,f)
+ try:
+ f = unhex(os.path.basename(f))
+ assert len(f) == 20
+ except:
+ continue
+ names.setdefault(f,[]).append(p)
+ try:
+ t = os.path.getmtime(p)
+ except:
+ t = time()
+ times.setdefault(f,[]).append(t)
+
+ for f in os.listdir(self.dir_piececache):
+ p = os.path.join(self.dir_piececache,f)
+ try:
+ f = unhex(os.path.basename(f))
+ assert len(f) == 20
+ except:
+ continue
+ for f2 in os.listdir(p):
+ p2 = os.path.join(p,f2)
+ names.setdefault(f,[]).append(p2)
+ try:
+ t = os.path.getmtime(p2)
+ except:
+ t = time()
+ times.setdefault(f,[]).append(t)
+ names.setdefault(f,[]).append(p)
+
+ for k,v in times.items():
+ if max(v) < exptime and not k in still_active:
+ for f in names[k]:
+ try:
+ os.remove(f)
+ except:
+ try:
+ os.removedirs(f)
+ except:
+ pass
+
+
+ def deleteOldTorrents(self, days, still_active = []):
+ self.deleteOldCacheData(days, still_active, True)
+
+
+ ###### OTHER ######
+
+ def getIconDir(self):
+ return self.dir_icons
--- /dev/null
+#written by John Hoffman
+
+from ConnChoice import *
+from wxPython.wx import *
+from types import IntType, FloatType, StringType
+from download_bt1 import defaults
+from ConfigDir import ConfigDir
+import sys,os
+import socket
+from parseargs import defaultargs
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+try:
+ wxFULL_REPAINT_ON_RESIZE
+except:
+ wxFULL_REPAINT_ON_RESIZE = 0 # fix for wx pre-2.5
+
+if (sys.platform == 'win32'):
+ _FONT = 9
+else:
+ _FONT = 10
+
+def HexToColor(s):
+ r,g,b = s.split(' ')
+ return wxColour(red=int(r,16), green=int(g,16), blue=int(b,16))
+
+def hex2(c):
+ h = hex(c)[2:]
+ if len(h) == 1:
+ h = '0'+h
+ return h
+def ColorToHex(c):
+ return hex2(c.Red()) + ' ' + hex2(c.Green()) + ' ' + hex2(c.Blue())
+
+ratesettingslist = []
+for x in connChoices:
+ if not x.has_key('super-seed'):
+ ratesettingslist.append(x['name'])
+
+
+configFileDefaults = [
+ #args only available for the gui client
+ ('win32_taskbar_icon', 1,
+        "whether to iconize to the system tray or not on win32"),
+ ('gui_stretchwindow', 0,
+ "whether to stretch the download status window to fit the torrent name"),
+ ('gui_displaystats', 1,
+ "whether to display statistics on peers and seeds"),
+ ('gui_displaymiscstats', 1,
+ "whether to display miscellaneous other statistics"),
+ ('gui_ratesettingsdefault', ratesettingslist[0],
+ "the default setting for maximum upload rate and users"),
+ ('gui_ratesettingsmode', 'full',
+ "what rate setting controls to display; options are 'none', 'basic', and 'full'"),
+ ('gui_forcegreenonfirewall', 0,
+ "forces the status icon to be green even if the client seems to be firewalled"),
+ ('gui_default_savedir', '',
+ "default save directory"),
+ ('last_saved', '', # hidden; not set in config
+ "where the last torrent was saved"),
+ ('gui_font', _FONT,
+ "the font size to use"),
+ ('gui_saveas_ask', -1,
+ "whether to ask where to download to (0 = never, 1 = always, -1 = automatic resume"),
+]
+
+def setwxconfigfiledefaults():
+ CHECKINGCOLOR = ColorToHex(wxSystemSettings_GetColour(wxSYS_COLOUR_3DSHADOW))
+ DOWNLOADCOLOR = ColorToHex(wxSystemSettings_GetColour(wxSYS_COLOUR_ACTIVECAPTION))
+
+ configFileDefaults.extend([
+ ('gui_checkingcolor', CHECKINGCOLOR,
+ "progress bar checking color"),
+ ('gui_downloadcolor', DOWNLOADCOLOR,
+ "progress bar downloading color"),
+ ('gui_seedingcolor', '00 FF 00',
+ "progress bar seeding color"),
+ ])
+
+defaultsToIgnore = ['responsefile', 'url', 'priority']
+
+
+class configReader:
+
+ def __init__(self):
+ self.configfile = wxConfig("BitTorrent",style=wxCONFIG_USE_LOCAL_FILE)
+ self.configMenuBox = None
+ self.advancedMenuBox = None
+ self._configReset = True # run reset for the first time
+
+ setwxconfigfiledefaults()
+
+ defaults.extend(configFileDefaults)
+ self.defaults = defaultargs(defaults)
+
+ self.configDir = ConfigDir('gui')
+ self.configDir.setDefaults(defaults,defaultsToIgnore)
+ if self.configDir.checkConfig():
+ self.config = self.configDir.loadConfig()
+ else:
+ self.config = self.configDir.getConfig()
+ self.importOldGUIConfig()
+ self.configDir.saveConfig()
+
+ updated = False # make all config default changes here
+
+ if self.config['gui_ratesettingsdefault'] not in ratesettingslist:
+ self.config['gui_ratesettingsdefault'] = (
+ self.defaults['gui_ratesettingsdefault'] )
+ updated = True
+ if self.config['ipv6_enabled'] and (
+ sys.version_info < (2,3) or not socket.has_ipv6 ):
+ self.config['ipv6_enabled'] = 0
+ updated = True
+ for c in ['gui_checkingcolor','gui_downloadcolor','gui_seedingcolor']:
+ try:
+ HexToColor(self.config[c])
+ except:
+ self.config[c] = self.defaults[c]
+ updated = True
+
+ if updated:
+ self.configDir.saveConfig()
+
+ self.configDir.deleteOldCacheData(self.config['expire_cache_data'])
+
+
+ def importOldGUIConfig(self):
+ oldconfig = wxConfig("BitTorrent",style=wxCONFIG_USE_LOCAL_FILE)
+ cont, s, i = oldconfig.GetFirstEntry()
+ if not cont:
+ oldconfig.DeleteAll()
+ return False
+ while cont: # import old config data
+ if self.config.has_key(s):
+ t = oldconfig.GetEntryType(s)
+ try:
+ if t == 1:
+ assert type(self.config[s]) == type('')
+ self.config[s] = oldconfig.Read(s)
+ elif t == 2 or t == 3:
+ assert type(self.config[s]) == type(1)
+ self.config[s] = int(oldconfig.ReadInt(s))
+ elif t == 4:
+ assert type(self.config[s]) == type(1.0)
+ self.config[s] = oldconfig.ReadFloat(s)
+ except:
+ pass
+ cont, s, i = oldconfig.GetNextEntry(i)
+
+# oldconfig.DeleteAll()
+ return True
+
+
+ def resetConfigDefaults(self):
+ for p,v in self.defaults.items():
+ if not p in defaultsToIgnore:
+ self.config[p] = v
+ self.configDir.saveConfig()
+
+ def writeConfigFile(self):
+ self.configDir.saveConfig()
+
+ def WriteLastSaved(self, l):
+ self.config['last_saved'] = l
+ self.configDir.saveConfig()
+
+
+ def getcheckingcolor(self):
+ return HexToColor(self.config['gui_checkingcolor'])
+ def getdownloadcolor(self):
+ return HexToColor(self.config['gui_downloadcolor'])
+ def getseedingcolor(self):
+ return HexToColor(self.config['gui_seedingcolor'])
+
+ def configReset(self):
+ r = self._configReset
+ self._configReset = False
+ return r
+
+ def getConfigDir(self):
+ return self.configDir
+
+ def getIconDir(self):
+ return self.configDir.getIconDir()
+
+ def getTorrentData(self,t):
+ return self.configDir.getTorrentData(t)
+
+ def setColorIcon(self, xxicon, xxiconptr, xxcolor):
+ idata = wxMemoryDC()
+ idata.SelectObject(xxicon)
+ idata.SetBrush(wxBrush(xxcolor,wxSOLID))
+ idata.DrawRectangle(0,0,16,16)
+ idata.SelectObject(wxNullBitmap)
+ xxiconptr.Refresh()
+
+
+ def getColorFromUser(self, parent, colInit):
+ data = wxColourData()
+ if colInit.Ok():
+ data.SetColour(colInit)
+ data.SetCustomColour(0, self.checkingcolor)
+ data.SetCustomColour(1, self.downloadcolor)
+ data.SetCustomColour(2, self.seedingcolor)
+ dlg = wxColourDialog(parent,data)
+ if not dlg.ShowModal():
+ return colInit
+ return dlg.GetColourData().GetColour()
+
+
+ def configMenu(self, parent):
+ self.parent = parent
+ try:
+ self.FONT = self.config['gui_font']
+ self.default_font = wxFont(self.FONT, wxDEFAULT, wxNORMAL, wxNORMAL, False)
+ self.checkingcolor = HexToColor(self.config['gui_checkingcolor'])
+ self.downloadcolor = HexToColor(self.config['gui_downloadcolor'])
+ self.seedingcolor = HexToColor(self.config['gui_seedingcolor'])
+
+ if (self.configMenuBox is not None):
+ try:
+ self.configMenuBox.Close()
+ except wxPyDeadObjectError, e:
+ self.configMenuBox = None
+
+ self.configMenuBox = wxFrame(None, -1, 'BitTorrent Preferences', size = (1,1),
+ style = wxDEFAULT_FRAME_STYLE|wxFULL_REPAINT_ON_RESIZE)
+ if (sys.platform == 'win32'):
+ self.icon = self.parent.icon
+ self.configMenuBox.SetIcon(self.icon)
+
+ panel = wxPanel(self.configMenuBox, -1)
+ self.panel = panel
+
+ def StaticText(text, font = self.FONT, underline = False, color = None, panel = panel):
+ x = wxStaticText(panel, -1, text, style = wxALIGN_LEFT)
+ x.SetFont(wxFont(font, wxDEFAULT, wxNORMAL, wxNORMAL, underline))
+ if color is not None:
+ x.SetForegroundColour(color)
+ return x
+
+ colsizer = wxFlexGridSizer(cols = 1, vgap = 8)
+
+ self.gui_stretchwindow_checkbox = wxCheckBox(panel, -1, "Stretch window to fit torrent name *")
+ self.gui_stretchwindow_checkbox.SetFont(self.default_font)
+ self.gui_stretchwindow_checkbox.SetValue(self.config['gui_stretchwindow'])
+
+ self.gui_displaystats_checkbox = wxCheckBox(panel, -1, "Display peer and seed statistics")
+ self.gui_displaystats_checkbox.SetFont(self.default_font)
+ self.gui_displaystats_checkbox.SetValue(self.config['gui_displaystats'])
+
+ self.gui_displaymiscstats_checkbox = wxCheckBox(panel, -1, "Display miscellaneous other statistics")
+ self.gui_displaymiscstats_checkbox.SetFont(self.default_font)
+ self.gui_displaymiscstats_checkbox.SetValue(self.config['gui_displaymiscstats'])
+
+ self.security_checkbox = wxCheckBox(panel, -1, "Don't allow multiple connections from the same IP")
+ self.security_checkbox.SetFont(self.default_font)
+ self.security_checkbox.SetValue(self.config['security'])
+
+ self.autokick_checkbox = wxCheckBox(panel, -1, "Kick/ban clients that send you bad data *")
+ self.autokick_checkbox.SetFont(self.default_font)
+ self.autokick_checkbox.SetValue(self.config['auto_kick'])
+
+ self.buffering_checkbox = wxCheckBox(panel, -1, "Enable read/write buffering *")
+ self.buffering_checkbox.SetFont(self.default_font)
+ self.buffering_checkbox.SetValue(self.config['buffer_reads'])
+
+ self.breakup_checkbox = wxCheckBox(panel, -1, "Break-up seed bitfield to foil ISP manipulation")
+ self.breakup_checkbox.SetFont(self.default_font)
+ self.breakup_checkbox.SetValue(self.config['breakup_seed_bitfield'])
+
+ self.autoflush_checkbox = wxCheckBox(panel, -1, "Flush data to disk every 5 minutes")
+ self.autoflush_checkbox.SetFont(self.default_font)
+ self.autoflush_checkbox.SetValue(self.config['auto_flush'])
+
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ self.ipv6enabled_checkbox = wxCheckBox(panel, -1, "Initiate and receive connections via IPv6 *")
+ self.ipv6enabled_checkbox.SetFont(self.default_font)
+ self.ipv6enabled_checkbox.SetValue(self.config['ipv6_enabled'])
+
+ self.gui_forcegreenonfirewall_checkbox = wxCheckBox(panel, -1,
+ "Force icon to display green when firewalled")
+ self.gui_forcegreenonfirewall_checkbox.SetFont(self.default_font)
+ self.gui_forcegreenonfirewall_checkbox.SetValue(self.config['gui_forcegreenonfirewall'])
+
+
+ self.minport_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*8, -1))
+ self.minport_data.SetFont(self.default_font)
+ self.minport_data.SetRange(1,65535)
+ self.minport_data.SetValue(self.config['minport'])
+
+ self.maxport_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*8, -1))
+ self.maxport_data.SetFont(self.default_font)
+ self.maxport_data.SetRange(1,65535)
+ self.maxport_data.SetValue(self.config['maxport'])
+
+ self.randomport_checkbox = wxCheckBox(panel, -1, "randomize")
+ self.randomport_checkbox.SetFont(self.default_font)
+ self.randomport_checkbox.SetValue(self.config['random_port'])
+
+ self.gui_font_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*5, -1))
+ self.gui_font_data.SetFont(self.default_font)
+ self.gui_font_data.SetRange(8,16)
+ self.gui_font_data.SetValue(self.config['gui_font'])
+
+ self.gui_ratesettingsdefault_data=wxChoice(panel, -1, choices = ratesettingslist)
+ self.gui_ratesettingsdefault_data.SetFont(self.default_font)
+ self.gui_ratesettingsdefault_data.SetStringSelection(self.config['gui_ratesettingsdefault'])
+
+ self.maxdownload_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*7, -1))
+ self.maxdownload_data.SetFont(self.default_font)
+ self.maxdownload_data.SetRange(0,5000)
+ self.maxdownload_data.SetValue(self.config['max_download_rate'])
+
+ self.gui_ratesettingsmode_data=wxRadioBox(panel, -1, 'Rate Settings Mode',
+ choices = [ 'none', 'basic', 'full' ] )
+ self.gui_ratesettingsmode_data.SetFont(self.default_font)
+ self.gui_ratesettingsmode_data.SetStringSelection(self.config['gui_ratesettingsmode'])
+
+ if (sys.platform == 'win32'):
+ self.win32_taskbar_icon_checkbox = wxCheckBox(panel, -1, "Minimize to system tray")
+ self.win32_taskbar_icon_checkbox.SetFont(self.default_font)
+ self.win32_taskbar_icon_checkbox.SetValue(self.config['win32_taskbar_icon'])
+
+# self.upnp_checkbox = wxCheckBox(panel, -1, "Enable automatic UPnP port forwarding")
+# self.upnp_checkbox.SetFont(self.default_font)
+# self.upnp_checkbox.SetValue(self.config['upnp_nat_access'])
+ self.upnp_data=wxChoice(panel, -1,
+ choices = ['disabled', 'type 1 (fast)', 'type 2 (slow)'])
+ self.upnp_data.SetFont(self.default_font)
+ self.upnp_data.SetSelection(self.config['upnp_nat_access'])
+
+ self.gui_default_savedir_ctrl = wxTextCtrl(parent = panel, id = -1,
+ value = self.config['gui_default_savedir'],
+ size = (26*self.FONT, -1), style = wxTE_PROCESS_TAB)
+ self.gui_default_savedir_ctrl.SetFont(self.default_font)
+
+ self.gui_savemode_data=wxRadioBox(panel, -1, 'Ask where to save: *',
+ choices = [ 'always', 'never', 'auto-resume' ] )
+ self.gui_savemode_data.SetFont(self.default_font)
+ self.gui_savemode_data.SetSelection(1-self.config['gui_saveas_ask'])
+
+ self.checkingcolor_icon = wxEmptyBitmap(16,16)
+ self.checkingcolor_iconptr = wxStaticBitmap(panel, -1, self.checkingcolor_icon)
+ self.setColorIcon(self.checkingcolor_icon, self.checkingcolor_iconptr, self.checkingcolor)
+
+ self.downloadcolor_icon = wxEmptyBitmap(16,16)
+ self.downloadcolor_iconptr = wxStaticBitmap(panel, -1, self.downloadcolor_icon)
+ self.setColorIcon(self.downloadcolor_icon, self.downloadcolor_iconptr, self.downloadcolor)
+
+ self.seedingcolor_icon = wxEmptyBitmap(16,16)
+ self.seedingcolor_iconptr = wxStaticBitmap(panel, -1, self.seedingcolor_icon)
+            self.setColorIcon(self.seedingcolor_icon, self.seedingcolor_iconptr, self.seedingcolor)
+
+ rowsizer = wxFlexGridSizer(cols = 2, hgap = 20)
+
+ block12sizer = wxFlexGridSizer(cols = 1, vgap = 7)
+
+ block1sizer = wxFlexGridSizer(cols = 1, vgap = 2)
+ if (sys.platform == 'win32'):
+ block1sizer.Add(self.win32_taskbar_icon_checkbox)
+# block1sizer.Add(self.upnp_checkbox)
+ block1sizer.Add(self.gui_stretchwindow_checkbox)
+ block1sizer.Add(self.gui_displaystats_checkbox)
+ block1sizer.Add(self.gui_displaymiscstats_checkbox)
+ block1sizer.Add(self.security_checkbox)
+ block1sizer.Add(self.autokick_checkbox)
+ block1sizer.Add(self.buffering_checkbox)
+ block1sizer.Add(self.breakup_checkbox)
+ block1sizer.Add(self.autoflush_checkbox)
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ block1sizer.Add(self.ipv6enabled_checkbox)
+ block1sizer.Add(self.gui_forcegreenonfirewall_checkbox)
+
+ block12sizer.Add(block1sizer)
+
+ colorsizer = wxStaticBoxSizer(wxStaticBox(panel, -1, "Gauge Colors:"), wxVERTICAL)
+ colorsizer1 = wxFlexGridSizer(cols = 7)
+ colorsizer1.Add(StaticText(' Checking: '), 1, wxALIGN_BOTTOM)
+ colorsizer1.Add(self.checkingcolor_iconptr, 1, wxALIGN_BOTTOM)
+ colorsizer1.Add(StaticText(' Downloading: '), 1, wxALIGN_BOTTOM)
+ colorsizer1.Add(self.downloadcolor_iconptr, 1, wxALIGN_BOTTOM)
+ colorsizer1.Add(StaticText(' Seeding: '), 1, wxALIGN_BOTTOM)
+ colorsizer1.Add(self.seedingcolor_iconptr, 1, wxALIGN_BOTTOM)
+ colorsizer1.Add(StaticText(' '))
+ minsize = self.checkingcolor_iconptr.GetBestSize()
+ minsize.SetHeight(minsize.GetHeight()+5)
+ colorsizer1.SetMinSize(minsize)
+ colorsizer.Add(colorsizer1)
+
+ block12sizer.Add(colorsizer, 1, wxALIGN_LEFT)
+
+ rowsizer.Add(block12sizer)
+
+ block3sizer = wxFlexGridSizer(cols = 1)
+
+ portsettingsSizer = wxStaticBoxSizer(wxStaticBox(panel, -1, "Port Range:*"), wxVERTICAL)
+ portsettingsSizer1 = wxGridSizer(cols = 2, vgap = 1)
+ portsettingsSizer1.Add(StaticText('From: '), 1, wxALIGN_CENTER_VERTICAL|wxALIGN_RIGHT)
+ portsettingsSizer1.Add(self.minport_data, 1, wxALIGN_BOTTOM)
+ portsettingsSizer1.Add(StaticText('To: '), 1, wxALIGN_CENTER_VERTICAL|wxALIGN_RIGHT)
+ portsettingsSizer1.Add(self.maxport_data, 1, wxALIGN_BOTTOM)
+ portsettingsSizer.Add(portsettingsSizer1)
+ portsettingsSizer.Add(self.randomport_checkbox, 1, wxALIGN_CENTER)
+ block3sizer.Add(portsettingsSizer, 1, wxALIGN_CENTER)
+ block3sizer.Add(StaticText(' '))
+ block3sizer.Add(self.gui_ratesettingsmode_data, 1, wxALIGN_CENTER)
+ block3sizer.Add(StaticText(' '))
+ ratesettingsSizer = wxFlexGridSizer(cols = 1, vgap = 2)
+ ratesettingsSizer.Add(StaticText('Default Rate Setting: *'), 1, wxALIGN_CENTER)
+ ratesettingsSizer.Add(self.gui_ratesettingsdefault_data, 1, wxALIGN_CENTER)
+ block3sizer.Add(ratesettingsSizer, 1, wxALIGN_CENTER)
+ if (sys.platform == 'win32'):
+ block3sizer.Add(StaticText(' '))
+ upnpSizer = wxFlexGridSizer(cols = 1, vgap = 2)
+ upnpSizer.Add(StaticText('UPnP Port Forwarding: *'), 1, wxALIGN_CENTER)
+ upnpSizer.Add(self.upnp_data, 1, wxALIGN_CENTER)
+ block3sizer.Add(upnpSizer, 1, wxALIGN_CENTER)
+
+ rowsizer.Add(block3sizer)
+ colsizer.Add(rowsizer)
+
+ block4sizer = wxFlexGridSizer(cols = 3, hgap = 15)
+ savepathsizer = wxFlexGridSizer(cols = 2, vgap = 1)
+ savepathsizer.Add(StaticText('Default Save Path: *'))
+ savepathsizer.Add(StaticText(' '))
+ savepathsizer.Add(self.gui_default_savedir_ctrl, 1, wxEXPAND)
+ savepathButton = wxButton(panel, -1, '...', size = (18,18))
+# savepathButton.SetFont(self.default_font)
+ savepathsizer.Add(savepathButton, 0, wxALIGN_CENTER)
+ savepathsizer.Add(self.gui_savemode_data, 0, wxALIGN_CENTER)
+ block4sizer.Add(savepathsizer, -1, wxALIGN_BOTTOM)
+
+ fontsizer = wxFlexGridSizer(cols = 1, vgap = 2)
+ fontsizer.Add(StaticText(''))
+ fontsizer.Add(StaticText('Font: *'), 1, wxALIGN_CENTER)
+ fontsizer.Add(self.gui_font_data, 1, wxALIGN_CENTER)
+ block4sizer.Add(fontsizer, 1, wxALIGN_CENTER_VERTICAL)
+
+ dratesettingsSizer = wxFlexGridSizer(cols = 1, vgap = 2)
+ dratesettingsSizer.Add(StaticText('Default Max'), 1, wxALIGN_CENTER)
+ dratesettingsSizer.Add(StaticText('Download Rate'), 1, wxALIGN_CENTER)
+ dratesettingsSizer.Add(StaticText('(kB/s): *'), 1, wxALIGN_CENTER)
+ dratesettingsSizer.Add(self.maxdownload_data, 1, wxALIGN_CENTER)
+ dratesettingsSizer.Add(StaticText('(0 = disabled)'), 1, wxALIGN_CENTER)
+
+ block4sizer.Add(dratesettingsSizer, 1, wxALIGN_CENTER_VERTICAL)
+
+ colsizer.Add(block4sizer, 0, wxALIGN_CENTER)
+# colsizer.Add(StaticText(' '))
+
+ savesizer = wxGridSizer(cols = 4, hgap = 10)
+ saveButton = wxButton(panel, -1, 'Save')
+# saveButton.SetFont(self.default_font)
+ savesizer.Add(saveButton, 0, wxALIGN_CENTER)
+
+ cancelButton = wxButton(panel, -1, 'Cancel')
+# cancelButton.SetFont(self.default_font)
+ savesizer.Add(cancelButton, 0, wxALIGN_CENTER)
+
+ defaultsButton = wxButton(panel, -1, 'Revert to Defaults')
+# defaultsButton.SetFont(self.default_font)
+ savesizer.Add(defaultsButton, 0, wxALIGN_CENTER)
+
+ advancedButton = wxButton(panel, -1, 'Advanced...')
+# advancedButton.SetFont(self.default_font)
+ savesizer.Add(advancedButton, 0, wxALIGN_CENTER)
+ colsizer.Add(savesizer, 1, wxALIGN_CENTER)
+
+ resizewarningtext=StaticText('* These settings will not take effect until the next time you start BitTorrent', self.FONT-2)
+ colsizer.Add(resizewarningtext, 1, wxALIGN_CENTER)
+
+ border = wxBoxSizer(wxHORIZONTAL)
+ border.Add(colsizer, 1, wxEXPAND | wxALL, 4)
+
+ panel.SetSizer(border)
+ panel.SetAutoLayout(True)
+
+ self.advancedConfig = {}
+
+ def setDefaults(evt, self = self):
+ try:
+ self.minport_data.SetValue(self.defaults['minport'])
+ self.maxport_data.SetValue(self.defaults['maxport'])
+ self.randomport_checkbox.SetValue(self.defaults['random_port'])
+ self.gui_stretchwindow_checkbox.SetValue(self.defaults['gui_stretchwindow'])
+ self.gui_displaystats_checkbox.SetValue(self.defaults['gui_displaystats'])
+ self.gui_displaymiscstats_checkbox.SetValue(self.defaults['gui_displaymiscstats'])
+ self.security_checkbox.SetValue(self.defaults['security'])
+ self.autokick_checkbox.SetValue(self.defaults['auto_kick'])
+ self.buffering_checkbox.SetValue(self.defaults['buffer_reads'])
+ self.breakup_checkbox.SetValue(self.defaults['breakup_seed_bitfield'])
+ self.autoflush_checkbox.SetValue(self.defaults['auto_flush'])
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ self.ipv6enabled_checkbox.SetValue(self.defaults['ipv6_enabled'])
+ self.gui_forcegreenonfirewall_checkbox.SetValue(self.defaults['gui_forcegreenonfirewall'])
+ self.gui_font_data.SetValue(self.defaults['gui_font'])
+ self.gui_ratesettingsdefault_data.SetStringSelection(self.defaults['gui_ratesettingsdefault'])
+ self.maxdownload_data.SetValue(self.defaults['max_download_rate'])
+ self.gui_ratesettingsmode_data.SetStringSelection(self.defaults['gui_ratesettingsmode'])
+ self.gui_default_savedir_ctrl.SetValue(self.defaults['gui_default_savedir'])
+ self.gui_savemode_data.SetSelection(1-self.defaults['gui_saveas_ask'])
+
+ self.checkingcolor = HexToColor(self.defaults['gui_checkingcolor'])
+ self.setColorIcon(self.checkingcolor_icon, self.checkingcolor_iconptr, self.checkingcolor)
+ self.downloadcolor = HexToColor(self.defaults['gui_downloadcolor'])
+ self.setColorIcon(self.downloadcolor_icon, self.downloadcolor_iconptr, self.downloadcolor)
+ self.seedingcolor = HexToColor(self.defaults['gui_seedingcolor'])
+ self.setColorIcon(self.seedingcolor_icon, self.seedingcolor_iconptr, self.seedingcolor)
+
+ if (sys.platform == 'win32'):
+ self.win32_taskbar_icon_checkbox.SetValue(self.defaults['win32_taskbar_icon'])
+# self.upnp_checkbox.SetValue(self.defaults['upnp_nat_access'])
+ self.upnp_data.SetSelection(self.defaults['upnp_nat_access'])
+
+ # reset advanced too
+ self.advancedConfig = {}
+ for key in ['ip', 'bind', 'min_peers', 'max_initiate', 'display_interval',
+ 'alloc_type', 'alloc_rate', 'max_files_open', 'max_connections', 'super_seeder',
+ 'ipv6_binds_v4', 'double_check', 'triple_check', 'lock_files', 'lock_while_reading',
+ 'expire_cache_data']:
+ self.advancedConfig[key] = self.defaults[key]
+ self.CloseAdvanced()
+ except:
+ self.parent.exception()
+
+
+ def saveConfigs(evt, self = self):
+ try:
+ self.config['gui_stretchwindow']=int(self.gui_stretchwindow_checkbox.GetValue())
+ self.config['gui_displaystats']=int(self.gui_displaystats_checkbox.GetValue())
+ self.config['gui_displaymiscstats']=int(self.gui_displaymiscstats_checkbox.GetValue())
+ self.config['security']=int(self.security_checkbox.GetValue())
+ self.config['auto_kick']=int(self.autokick_checkbox.GetValue())
+ buffering=int(self.buffering_checkbox.GetValue())
+ self.config['buffer_reads']=buffering
+ if buffering:
+ self.config['write_buffer_size']=self.defaults['write_buffer_size']
+ else:
+ self.config['write_buffer_size']=0
+ self.config['breakup_seed_bitfield']=int(self.breakup_checkbox.GetValue())
+ if self.autoflush_checkbox.GetValue():
+ self.config['auto_flush']=5
+ else:
+ self.config['auto_flush']=0
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ self.config['ipv6_enabled']=int(self.ipv6enabled_checkbox.GetValue())
+ self.config['gui_forcegreenonfirewall']=int(self.gui_forcegreenonfirewall_checkbox.GetValue())
+ self.config['minport']=self.minport_data.GetValue()
+ self.config['maxport']=self.maxport_data.GetValue()
+ self.config['random_port']=int(self.randomport_checkbox.GetValue())
+ self.config['gui_font']=self.gui_font_data.GetValue()
+ self.config['gui_ratesettingsdefault']=self.gui_ratesettingsdefault_data.GetStringSelection()
+ self.config['max_download_rate']=self.maxdownload_data.GetValue()
+ self.config['gui_ratesettingsmode']=self.gui_ratesettingsmode_data.GetStringSelection()
+ self.config['gui_default_savedir']=self.gui_default_savedir_ctrl.GetValue()
+ self.config['gui_saveas_ask']=1-self.gui_savemode_data.GetSelection()
+ self.config['gui_checkingcolor']=ColorToHex(self.checkingcolor)
+ self.config['gui_downloadcolor']=ColorToHex(self.downloadcolor)
+ self.config['gui_seedingcolor']=ColorToHex(self.seedingcolor)
+
+ if (sys.platform == 'win32'):
+ self.config['win32_taskbar_icon']=int(self.win32_taskbar_icon_checkbox.GetValue())
+# self.config['upnp_nat_access']=int(self.upnp_checkbox.GetValue())
+ self.config['upnp_nat_access']=self.upnp_data.GetSelection()
+
+ if self.advancedConfig:
+ for key,val in self.advancedConfig.items():
+ self.config[key] = val
+
+ self.writeConfigFile()
+ self._configReset = True
+ self.Close()
+ except:
+ self.parent.exception()
+
+ def cancelConfigs(evt, self = self):
+ self.Close()
+
+ def savepath_set(evt, self = self):
+ try:
+ d = self.gui_default_savedir_ctrl.GetValue()
+ if d == '':
+ d = self.config['last_saved']
+ dl = wxDirDialog(self.panel, 'Choose a default directory to save to',
+ d, style = wxDD_DEFAULT_STYLE | wxDD_NEW_DIR_BUTTON)
+ if dl.ShowModal() == wxID_OK:
+ self.gui_default_savedir_ctrl.SetValue(dl.GetPath())
+ except:
+ self.parent.exception()
+
+ def checkingcoloricon_set(evt, self = self):
+ try:
+ newcolor = self.getColorFromUser(self.panel,self.checkingcolor)
+ self.setColorIcon(self.checkingcolor_icon, self.checkingcolor_iconptr, newcolor)
+ self.checkingcolor = newcolor
+ except:
+ self.parent.exception()
+
+ def downloadcoloricon_set(evt, self = self):
+ try:
+ newcolor = self.getColorFromUser(self.panel,self.downloadcolor)
+ self.setColorIcon(self.downloadcolor_icon, self.downloadcolor_iconptr, newcolor)
+ self.downloadcolor = newcolor
+ except:
+ self.parent.exception()
+
+ def seedingcoloricon_set(evt, self = self):
+ try:
+ newcolor = self.getColorFromUser(self.panel,self.seedingcolor)
+ self.setColorIcon(self.seedingcolor_icon, self.seedingcolor_iconptr, newcolor)
+ self.seedingcolor = newcolor
+ except:
+ self.parent.exception()
+
+ EVT_BUTTON(self.configMenuBox, saveButton.GetId(), saveConfigs)
+ EVT_BUTTON(self.configMenuBox, cancelButton.GetId(), cancelConfigs)
+ EVT_BUTTON(self.configMenuBox, defaultsButton.GetId(), setDefaults)
+ EVT_BUTTON(self.configMenuBox, advancedButton.GetId(), self.advancedMenu)
+ EVT_BUTTON(self.configMenuBox, savepathButton.GetId(), savepath_set)
+ EVT_LEFT_DOWN(self.checkingcolor_iconptr, checkingcoloricon_set)
+ EVT_LEFT_DOWN(self.downloadcolor_iconptr, downloadcoloricon_set)
+ EVT_LEFT_DOWN(self.seedingcolor_iconptr, seedingcoloricon_set)
+
+ self.configMenuBox.Show ()
+ border.Fit(panel)
+ self.configMenuBox.Fit()
+ except:
+ self.parent.exception()
+
+
+ def Close(self):
+ self.CloseAdvanced()
+ if self.configMenuBox is not None:
+ try:
+ self.configMenuBox.Close ()
+ except wxPyDeadObjectError, e:
+ pass
+ self.configMenuBox = None
+
+ def advancedMenu(self, event = None):
+ try:
+ if not self.advancedConfig:
+ for key in ['ip', 'bind', 'min_peers', 'max_initiate', 'display_interval',
+ 'alloc_type', 'alloc_rate', 'max_files_open', 'max_connections', 'super_seeder',
+ 'ipv6_binds_v4', 'double_check', 'triple_check', 'lock_files', 'lock_while_reading',
+ 'expire_cache_data']:
+ self.advancedConfig[key] = self.config[key]
+
+ if (self.advancedMenuBox is not None):
+ try:
+ self.advancedMenuBox.Close ()
+ except wxPyDeadObjectError, e:
+ self.advancedMenuBox = None
+
+ self.advancedMenuBox = wxFrame(None, -1, 'BitTorrent Advanced Preferences', size = (1,1),
+ style = wxDEFAULT_FRAME_STYLE|wxFULL_REPAINT_ON_RESIZE)
+ if (sys.platform == 'win32'):
+ self.advancedMenuBox.SetIcon(self.icon)
+
+ panel = wxPanel(self.advancedMenuBox, -1)
+# self.panel = panel
+
+ def StaticText(text, font = self.FONT, underline = False, color = None, panel = panel):
+ x = wxStaticText(panel, -1, text, style = wxALIGN_LEFT)
+ x.SetFont(wxFont(font, wxDEFAULT, wxNORMAL, wxNORMAL, underline))
+ if color is not None:
+ x.SetForegroundColour(color)
+ return x
+
+ colsizer = wxFlexGridSizer(cols = 1, hgap = 13, vgap = 13)
+ warningtext = StaticText('CHANGE THESE SETTINGS AT YOUR OWN RISK', self.FONT+4, True, 'Red')
+ colsizer.Add(warningtext, 1, wxALIGN_CENTER)
+
+ self.ip_data = wxTextCtrl(parent = panel, id = -1,
+ value = self.advancedConfig['ip'],
+ size = (self.FONT*13, int(self.FONT*2.2)), style = wxTE_PROCESS_TAB)
+ self.ip_data.SetFont(self.default_font)
+
+ self.bind_data = wxTextCtrl(parent = panel, id = -1,
+ value = self.advancedConfig['bind'],
+ size = (self.FONT*13, int(self.FONT*2.2)), style = wxTE_PROCESS_TAB)
+ self.bind_data.SetFont(self.default_font)
+
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ self.ipv6bindsv4_data=wxChoice(panel, -1,
+ choices = ['separate sockets', 'single socket'])
+ self.ipv6bindsv4_data.SetFont(self.default_font)
+ self.ipv6bindsv4_data.SetSelection(self.advancedConfig['ipv6_binds_v4'])
+
+ self.minpeers_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*7, -1))
+ self.minpeers_data.SetFont(self.default_font)
+ self.minpeers_data.SetRange(10,100)
+ self.minpeers_data.SetValue(self.advancedConfig['min_peers'])
+ # max_initiate = 2*minpeers
+
+ self.displayinterval_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*7, -1))
+ self.displayinterval_data.SetFont(self.default_font)
+ self.displayinterval_data.SetRange(100,2000)
+ self.displayinterval_data.SetValue(int(self.advancedConfig['display_interval']*1000))
+
+ self.alloctype_data=wxChoice(panel, -1,
+ choices = ['normal', 'background', 'pre-allocate', 'sparse'])
+ self.alloctype_data.SetFont(self.default_font)
+ self.alloctype_data.SetStringSelection(self.advancedConfig['alloc_type'])
+
+ self.allocrate_data = wxSpinCtrl(panel, -1, '', (-1,-1), (self.FONT*7,-1))
+ self.allocrate_data.SetFont(self.default_font)
+ self.allocrate_data.SetRange(1,100)
+ self.allocrate_data.SetValue(int(self.advancedConfig['alloc_rate']))
+
+ self.locking_data=wxChoice(panel, -1,
+ choices = ['no locking', 'lock while writing', 'lock always'])
+ self.locking_data.SetFont(self.default_font)
+ if self.advancedConfig['lock_files']:
+ if self.advancedConfig['lock_while_reading']:
+ self.locking_data.SetSelection(2)
+ else:
+ self.locking_data.SetSelection(1)
+ else:
+ self.locking_data.SetSelection(0)
+
+ self.doublecheck_data=wxChoice(panel, -1,
+ choices = ['no extra checking', 'double-check', 'triple-check'])
+ self.doublecheck_data.SetFont(self.default_font)
+ if self.advancedConfig['double_check']:
+ if self.advancedConfig['triple_check']:
+ self.doublecheck_data.SetSelection(2)
+ else:
+ self.doublecheck_data.SetSelection(1)
+ else:
+ self.doublecheck_data.SetSelection(0)
+
+ self.maxfilesopen_choices = ['50', '100', '200', 'no limit ']
+ self.maxfilesopen_data=wxChoice(panel, -1, choices = self.maxfilesopen_choices)
+ self.maxfilesopen_data.SetFont(self.default_font)
+ setval = self.advancedConfig['max_files_open']
+ if setval == 0:
+ setval = 'no limit '
+ else:
+ setval = str(setval)
+ if not setval in self.maxfilesopen_choices:
+ setval = self.maxfilesopen_choices[0]
+ self.maxfilesopen_data.SetStringSelection(setval)
+
+ self.maxconnections_choices = ['no limit ', '20', '30', '40', '50', '60', '100', '200']
+ self.maxconnections_data=wxChoice(panel, -1, choices = self.maxconnections_choices)
+ self.maxconnections_data.SetFont(self.default_font)
+ setval = self.advancedConfig['max_connections']
+ if setval == 0:
+ setval = 'no limit '
+ else:
+ setval = str(setval)
+ if not setval in self.maxconnections_choices:
+ setval = self.maxconnections_choices[0]
+ self.maxconnections_data.SetStringSelection(setval)
+
+ self.superseeder_data=wxChoice(panel, -1,
+ choices = ['normal', 'super-seed'])
+ self.superseeder_data.SetFont(self.default_font)
+ self.superseeder_data.SetSelection(self.advancedConfig['super_seeder'])
+
+ self.expirecache_choices = ['never ', '3', '5', '7', '10', '15', '30', '60', '90']
+ self.expirecache_data=wxChoice(panel, -1, choices = self.expirecache_choices)
+ setval = self.advancedConfig['expire_cache_data']
+ if setval == 0:
+ setval = 'never '
+ else:
+ setval = str(setval)
+ if not setval in self.expirecache_choices:
+ setval = self.expirecache_choices[0]
+ self.expirecache_data.SetFont(self.default_font)
+ self.expirecache_data.SetStringSelection(setval)
+
+
+ twocolsizer = wxFlexGridSizer(cols = 2, hgap = 20)
+ datasizer = wxFlexGridSizer(cols = 2, vgap = 2)
+ datasizer.Add(StaticText('Local IP: '), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.ip_data)
+ datasizer.Add(StaticText('IP to bind to: '), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.bind_data)
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ datasizer.Add(StaticText('IPv6 socket handling: '), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.ipv6bindsv4_data)
+ datasizer.Add(StaticText('Minimum number of peers: '), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.minpeers_data)
+ datasizer.Add(StaticText('Display interval (ms): '), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.displayinterval_data)
+ datasizer.Add(StaticText('Disk allocation type:'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.alloctype_data)
+ datasizer.Add(StaticText('Allocation rate (MiB/s):'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.allocrate_data)
+ datasizer.Add(StaticText('File locking:'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.locking_data)
+ datasizer.Add(StaticText('Extra data checking:'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.doublecheck_data)
+ datasizer.Add(StaticText('Max files open:'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.maxfilesopen_data)
+ datasizer.Add(StaticText('Max peer connections:'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.maxconnections_data)
+ datasizer.Add(StaticText('Default seeding mode:'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.superseeder_data)
+            datasizer.Add(StaticText('Expire resume data (days):'), 1, wxALIGN_CENTER_VERTICAL)
+ datasizer.Add(self.expirecache_data)
+
+ twocolsizer.Add(datasizer)
+
+ infosizer = wxFlexGridSizer(cols = 1)
+ self.hinttext = StaticText('', self.FONT, False, 'Blue')
+ infosizer.Add(self.hinttext, 1, wxALIGN_LEFT|wxALIGN_CENTER_VERTICAL)
+ infosizer.SetMinSize((180,100))
+ twocolsizer.Add(infosizer, 1, wxEXPAND)
+
+ colsizer.Add(twocolsizer)
+
+ savesizer = wxGridSizer(cols = 3, hgap = 20)
+ okButton = wxButton(panel, -1, 'OK')
+# okButton.SetFont(self.default_font)
+ savesizer.Add(okButton, 0, wxALIGN_CENTER)
+
+ cancelButton = wxButton(panel, -1, 'Cancel')
+# cancelButton.SetFont(self.default_font)
+ savesizer.Add(cancelButton, 0, wxALIGN_CENTER)
+
+ defaultsButton = wxButton(panel, -1, 'Revert to Defaults')
+# defaultsButton.SetFont(self.default_font)
+ savesizer.Add(defaultsButton, 0, wxALIGN_CENTER)
+ colsizer.Add(savesizer, 1, wxALIGN_CENTER)
+
+ resizewarningtext=StaticText('None of these settings will take effect until the next time you start BitTorrent', self.FONT-2)
+ colsizer.Add(resizewarningtext, 1, wxALIGN_CENTER)
+
+ border = wxBoxSizer(wxHORIZONTAL)
+ border.Add(colsizer, 1, wxEXPAND | wxALL, 4)
+
+ panel.SetSizer(border)
+ panel.SetAutoLayout(True)
+
+ def setDefaults(evt, self = self):
+ try:
+ self.ip_data.SetValue(self.defaults['ip'])
+ self.bind_data.SetValue(self.defaults['bind'])
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ self.ipv6bindsv4_data.SetSelection(self.defaults['ipv6_binds_v4'])
+ self.minpeers_data.SetValue(self.defaults['min_peers'])
+ self.displayinterval_data.SetValue(int(self.defaults['display_interval']*1000))
+ self.alloctype_data.SetStringSelection(self.defaults['alloc_type'])
+ self.allocrate_data.SetValue(int(self.defaults['alloc_rate']))
+ if self.defaults['lock_files']:
+ if self.defaults['lock_while_reading']:
+ self.locking_data.SetSelection(2)
+ else:
+ self.locking_data.SetSelection(1)
+ else:
+ self.locking_data.SetSelection(0)
+ if self.defaults['double_check']:
+ if self.defaults['triple_check']:
+ self.doublecheck_data.SetSelection(2)
+ else:
+ self.doublecheck_data.SetSelection(1)
+ else:
+ self.doublecheck_data.SetSelection(0)
+ setval = self.defaults['max_files_open']
+ if setval == 0:
+ setval = 'no limit '
+ else:
+ setval = str(setval)
+ if not setval in self.maxfilesopen_choices:
+ setval = self.maxfilesopen_choices[0]
+ self.maxfilesopen_data.SetStringSelection(setval)
+ setval = self.defaults['max_connections']
+ if setval == 0:
+ setval = 'no limit '
+ else:
+ setval = str(setval)
+ if not setval in self.maxconnections_choices:
+ setval = self.maxconnections_choices[0]
+ self.maxconnections_data.SetStringSelection(setval)
+ self.superseeder_data.SetSelection(int(self.defaults['super_seeder']))
+ setval = self.defaults['expire_cache_data']
+ if setval == 0:
+ setval = 'never '
+ else:
+ setval = str(setval)
+ if not setval in self.expirecache_choices:
+ setval = self.expirecache_choices[0]
+ self.expirecache_data.SetStringSelection(setval)
+ except:
+ self.parent.exception()
+
+ def saveConfigs(evt, self = self):
+ try:
+ self.advancedConfig['ip'] = self.ip_data.GetValue()
+ self.advancedConfig['bind'] = self.bind_data.GetValue()
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ self.advancedConfig['ipv6_binds_v4'] = self.ipv6bindsv4_data.GetSelection()
+ self.advancedConfig['min_peers'] = self.minpeers_data.GetValue()
+ self.advancedConfig['display_interval'] = float(self.displayinterval_data.GetValue())/1000
+ self.advancedConfig['alloc_type'] = self.alloctype_data.GetStringSelection()
+ self.advancedConfig['alloc_rate'] = float(self.allocrate_data.GetValue())
+ self.advancedConfig['lock_files'] = int(self.locking_data.GetSelection() >= 1)
+ self.advancedConfig['lock_while_reading'] = int(self.locking_data.GetSelection() > 1)
+ self.advancedConfig['double_check'] = int(self.doublecheck_data.GetSelection() >= 1)
+ self.advancedConfig['triple_check'] = int(self.doublecheck_data.GetSelection() > 1)
+ try:
+ self.advancedConfig['max_files_open'] = int(self.maxfilesopen_data.GetStringSelection())
+ except: # if it ain't a number, it must be "no limit"
+ self.advancedConfig['max_files_open'] = 0
+ try:
+ self.advancedConfig['max_connections'] = int(self.maxconnections_data.GetStringSelection())
+ self.advancedConfig['max_initiate'] = min(
+ 2*self.advancedConfig['min_peers'], self.advancedConfig['max_connections'])
+ except: # if it ain't a number, it must be "no limit"
+ self.advancedConfig['max_connections'] = 0
+ self.advancedConfig['max_initiate'] = 2*self.advancedConfig['min_peers']
+ self.advancedConfig['super_seeder']=int(self.superseeder_data.GetSelection())
+ try:
+ self.advancedConfig['expire_cache_data'] = int(self.expirecache_data.GetStringSelection())
+ except:
+ self.advancedConfig['expire_cache_data'] = 0
+ self.advancedMenuBox.Close()
+ except:
+ self.parent.exception()
+
+ def cancelConfigs(evt, self = self):
+ self.advancedMenuBox.Close()
+
+ def ip_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nThe IP reported to the tracker.\n' +
+                    'Unless the tracker is on the\n' +
+ 'same intranet as this client,\n' +
+ 'the tracker will autodetect the\n' +
+ "client's IP and ignore this\n" +
+ "value.")
+
+ def bind_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nThe IP the client will bind to.\n' +
+ 'Only useful if your machine is\n' +
+ 'directly handling multiple IPs.\n' +
+ "If you don't know what this is,\n" +
+ "leave it blank.")
+
+ def ipv6bindsv4_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nCertain operating systems will\n' +
+ 'open IPv4 protocol connections on\n' +
+ 'an IPv6 socket; others require you\n' +
+ "to open two sockets on the same\n" +
+ "port, one IPv4 and one IPv6.")
+
+ def minpeers_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nThe minimum number of peers the\n' +
+ 'client tries to stay connected\n' +
+ 'with. Do not set this higher\n' +
+ 'unless you have a very fast\n' +
+ "connection and a lot of system\n" +
+ "resources.")
+
+ def displayinterval_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nHow often to update the\n' +
+ 'graphical display, in 1/1000s\n' +
+ 'of a second. Setting this too low\n' +
+ "will strain your computer's\n" +
+ "processor and video access.")
+
+ def alloctype_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\nHow to allocate disk space.\n' +
+ 'normal allocates space as data is\n' +
+ 'received, background also adds\n' +
+ "space in the background, pre-\n" +
+ "allocate reserves up front, and\n" +
+ 'sparse is only for filesystems\n' +
+ 'that support it by default.')
+
+ def allocrate_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nAt what rate to allocate disk\n' +
+ 'space when allocating in the\n' +
+ 'background. Set this too high on a\n' +
+ "slow filesystem and your download\n" +
+ "will slow to a crawl.")
+
+ def locking_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\n\nFile locking prevents other\n' +
+ 'programs (including other instances\n' +
+ 'of BitTorrent) from accessing files\n' +
+ "you are downloading.")
+
+ def doublecheck_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nHow much extra checking to do\n' +
+ 'making sure no data is corrupted.\n' +
+ 'Double-check mode uses more CPU,\n' +
+ "while triple-check mode increases\n" +
+ "disk accesses.")
+
+ def maxfilesopen_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\n\nThe maximum number of files to\n' +
+ 'keep open at the same time. Zero\n' +
+ 'means no limit. Please note that\n' +
+ "if this option is in effect,\n" +
+ "files are not guaranteed to be\n" +
+ "locked.")
+
+ def maxconnections_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\nSome operating systems, most\n' +
+ 'notably Windows 9x/ME combined\n' +
+ 'with certain network drivers,\n' +
+ "cannot handle more than a certain\n" +
+ "number of open ports. If the\n" +
+ "client freezes, try setting this\n" +
+ "to 60 or below.")
+
+ def superseeder_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\nThe "super-seed" method allows\n' +
+ 'a single source to more efficiently\n' +
+ 'seed a large torrent, but is not\n' +
+ "necessary in a well-seeded torrent,\n" +
+ "and causes problems with statistics.\n" +
+                    "Unless you routinely seed torrents,\n" +
+                    "you can enable this by selecting\n" +
+                    '"SUPER-SEED" for connection type.\n' +
+                    '(Once enabled, it does not turn off.)')
+
+ def expirecache_hint(evt, self = self):
+ self.hinttext.SetLabel('\n\nThe client stores temporary data\n' +
+ 'in order to handle downloading only\n' +
+ 'specific files from the torrent and\n' +
+ "so it can resume downloads more\n" +
+ "quickly. This sets how long the\n" +
+ "client will keep this data before\n" +
+ "deleting it to free disk space.")
+
+ EVT_BUTTON(self.advancedMenuBox, okButton.GetId(), saveConfigs)
+ EVT_BUTTON(self.advancedMenuBox, cancelButton.GetId(), cancelConfigs)
+ EVT_BUTTON(self.advancedMenuBox, defaultsButton.GetId(), setDefaults)
+ EVT_ENTER_WINDOW(self.ip_data, ip_hint)
+ EVT_ENTER_WINDOW(self.bind_data, bind_hint)
+ if sys.version_info >= (2,3) and socket.has_ipv6:
+ EVT_ENTER_WINDOW(self.ipv6bindsv4_data, ipv6bindsv4_hint)
+ EVT_ENTER_WINDOW(self.minpeers_data, minpeers_hint)
+ EVT_ENTER_WINDOW(self.displayinterval_data, displayinterval_hint)
+ EVT_ENTER_WINDOW(self.alloctype_data, alloctype_hint)
+ EVT_ENTER_WINDOW(self.allocrate_data, allocrate_hint)
+ EVT_ENTER_WINDOW(self.locking_data, locking_hint)
+ EVT_ENTER_WINDOW(self.doublecheck_data, doublecheck_hint)
+ EVT_ENTER_WINDOW(self.maxfilesopen_data, maxfilesopen_hint)
+ EVT_ENTER_WINDOW(self.maxconnections_data, maxconnections_hint)
+ EVT_ENTER_WINDOW(self.superseeder_data, superseeder_hint)
+ EVT_ENTER_WINDOW(self.expirecache_data, expirecache_hint)
+
+ self.advancedMenuBox.Show ()
+ border.Fit(panel)
+ self.advancedMenuBox.Fit()
+ except:
+ self.parent.exception()
+
+
+ def CloseAdvanced(self):
+ if self.advancedMenuBox is not None:
+ try:
+ self.advancedMenuBox.Close()
+ except wxPyDeadObjectError, e:
+ self.advancedMenuBox = None
+
--- /dev/null
+connChoices=(
+ {'name':'automatic',
+ 'rate':{'min':0, 'max':5000, 'def': 0},
+ 'conn':{'min':0, 'max':100, 'def': 0},
+ 'automatic':1},
+ {'name':'unlimited',
+ 'rate':{'min':0, 'max':5000, 'def': 0, 'div': 50},
+ 'conn':{'min':4, 'max':100, 'def': 4}},
+ {'name':'dialup/isdn',
+ 'rate':{'min':3, 'max': 8, 'def': 5},
+ 'conn':{'min':2, 'max': 3, 'def': 2},
+ 'initiate': 12},
+ {'name':'dsl/cable slow',
+ 'rate':{'min':10, 'max': 48, 'def': 13},
+ 'conn':{'min':4, 'max': 20, 'def': 4}},
+ {'name':'dsl/cable fast',
+ 'rate':{'min':20, 'max': 100, 'def': 40},
+ 'conn':{'min':4, 'max': 30, 'def': 6}},
+ {'name':'T1',
+ 'rate':{'min':100, 'max': 300, 'def':150},
+ 'conn':{'min':4, 'max': 40, 'def':10}},
+ {'name':'T3+',
+ 'rate':{'min':400, 'max':2000, 'def':500},
+ 'conn':{'min':4, 'max':100, 'def':20}},
+ {'name':'seeder',
+ 'rate':{'min':0, 'max':5000, 'def':0, 'div': 50},
+ 'conn':{'min':1, 'max':100, 'def':1}},
+ {'name':'SUPER-SEED', 'super-seed':1}
+ )
+
+connChoiceList = map(lambda x:x['name'], connChoices)
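+
+# --- illustrative sketch (not part of the original file) ---
+# Each entry above bundles a display name with optional 'rate' and 'conn'
+# limits (min/max/default).  A caller might clamp a user-supplied rate to a
+# preset like this; clamp_rate() is a hypothetical helper written for this
+# example only, not something defined elsewhere in the codebase:
+#
+#     def clamp_rate(choice_name, rate):
+#         choice = connChoices[connChoiceList.index(choice_name)]
+#         limits = choice.get('rate')
+#         if limits is None:       # e.g. 'SUPER-SEED' carries no rate block
+#             return rate
+#         return min(max(rate, limits['min']), limits['max'])
+#
+#     # clamp_rate('dialup/isdn', 50) -> 8, that preset's upper limit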
--- /dev/null
+# Generated from bt_MakeCreateIcons - 05/10/04 22:15:33
+# T-0.3.0 (BitTornado)
+
+from binascii import a2b_base64
+from zlib import decompress
+from os.path import join
+
+icons = {
+ "icon_bt.ico":
+ "eJyt1K+OFEEQx/FaQTh5GDRZhSQpiUHwCrxCBYXFrjyJLXeXEARPsZqUPMm+" +
+ "AlmP+PGtngoLDji69zMz2zt/qqtr1mxHv7621d4+MnvK/jl66Bl2drV+e7Wz" +
+ "S/v12A7rY4fDtuvOwfF4tOPXo52/fLLz+WwpWd6nqRXHKXux39sTrtnjNd7g" +
+ "PW7wGSd860f880kffjvJ2QYS1Zcw4AjcoaA5yRFIFDQXOgKJguZmjkCioB4T" +
+ "Y2CqxpTXA7sHEgVNEC8RSBQ0gfk7xtknCupgk3EEEgXlNgFHIFHQTMoRSBQ0" +
+ "E+1ouicKmsk7AomCJiGOQKKgSZIjkChoEucIJAqaZDoCiYImwb4iydULmqQ7" +
+ "AomC1kLcEQ/jSBQ0i+MIJAqaBXMEElVdi9siOgKJgmZhfWWlVjTddXW/FtsR" +
+ "SBQ0BeAIJAqaonAEEgVNoTgCiYKmeByBREHaqiVWRtSRrAJzBBIFTdE5AomC" +
+ "phBPpxPP57dVkDfrTl063nUVnWe383fZx9tb3uN+o7U+BLDtuvcQm8d/27Y/" +
+ "jO3o5/ay+YPv/+f6y30e1OyB7QcsGWFj",
+ "icon_done.ico":
+ "eJyt1K2OVEEQhuEaQbJyMWgyCklSEoPgFvYWKigsduRKbLndhCC4itGk5Erm" +
+ "Fsh4xMdbfSoMOGDpnuf89Jyf6uqaMdvRr69ttbdPzJ6xf4Eeeo6dXa3vXu/s" +
+ "0n49tsP62OGw7bpzcDwe7fj1aOcvn+x8PltKlg9pasVxyl7u9/aUe/Z4gxu8" +
+ "xy0+44Rv/Yp/vujDbxc520Ci+hYGHIF7FDQXOQKJguZGRyBR0DzMEUgU1GNi" +
+ "DEzVmPJ6YfdAoqAJ4hUCiYImMH/HOPtEQR1sMo5AoqDcJuAIJAqaSTkCiYJm" +
+ "oh1N90RBM3lHIFHQJMQRSBQ0SXIEEgVN4hyBREGTTEcgUdAk2FckuXpBk3RH" +
+ "IFHQWoh74mEciYJmcRyBREGzYI5AoqprcVtERyBR0Cysr6zUiqa7rh7WYjsC" +
+ "iYKmAByBREFTFI5AoqApFEcgUdAUjyOQKEhbtcTKiDqSVWCOQKKgKTpHIFHQ" +
+ "FOLpdOL9fLcK8nY9qUvHu66i8+x2/i77eHfH77h/0VofAth23Xuoz/+2bX8Y" +
+ "29HP7WXzB+f/5/7Lcx7V7JHtB9dPG3I=",
+ "black.ico":
+ "eJzt1zsOgkAYReFLLCztjJ2UlpLY485kOS7DpbgESwqTcQZDghjxZwAfyfl0" +
+ "LIieGzUWSom/pan840rHnbSUtPHHX9Je9+tAh2ybNe8TZZ/vk8ajJ4zl6JVJ" +
+ "+xFx+0R03Djx1/2B8bcT9L/bt0+4Wq+4se8e/VTfMvGqb4n3nYiIGz+lvt9s" +
+ "9EpE2T4xJN4xNFYWU6t+JWXuXDFzTom7SodSyi/S+iwtwjlJ80KaNY/C34rW" +
+ "aT8nvK5uhF7ohn7Yqfb87kffLAAAAAAAAAAAAAAAAAAAGMUNy7dADg==",
+ "blue.ico":
+ "eJzt10EOwUAYhuGv6cLSTux06QD2dTM9jmM4iiNYdiEZ81cIFTWddtDkfbQW" +
+ "De8XogtS5h9FIf+81H4jLSSt/ekvaavrdaCDez4SZV+PpPHoicBy9ErSfkQ8" +
+ "fCI6Hjgx6f7A+McJ+r/t95i46xMP7bf8Uz9o4k0/XMT338voP5shK0MkjXcM" +
+ "YSqam6Qunatyf7Nk7iztaqk8SaujNLfzIM0qKX88ZX8rWmf7Nfa+W8N61rW+" +
+ "7TR7fverHxYAAAAAAAAAAAAAAAAAAIziApVZ444=",
+ "green.ico":
+ "eJzt1zEOgjAAheFHGBzdjJuMHsAdbybxNB7Do3gERwaT2mJIBCOWlqok/yc4" +
+ "EP1fNDIoZfZRFLLPa5120krS1p72kvZ6XAeGHLtHouzrkTQePOFZDl5J2g+I" +
+ "+08Exz0nZt2PjH+coP/bvveEaY2L+/VN13/1PSbe9v0FfP+jTP6ziVmJkTQ+" +
+ "MISZaO6SujSmyu3dkpmbdKil8iptLtLSnWdpUUn58yn3t6J39l/j3tc2XM91" +
+ "Xd/tNHt296sfFgAAAAAAAAAAAAAAAAAATOIOVLEoDg==",
+ "red.ico":
+ "eJzt10EOwUAYhuGv6cLSTux06QD2dTOO4xiO4giWXUjG/BVCRTuddtDkfbQW" +
+ "De8XogtS5h9FIf+81GEjLSSt/ekvaavbdaCVez0SZd+PpPHoicBy9ErSfkQ8" +
+ "fCI6Hjgx6f7AeOcE/d/2QyceesaD+g1/1u+e+NwPF/H99zL6z2bIyhBJ4y1D" +
+ "mIb6LqlK5/a5v1syd5F2lVSepdVJmtt5lGZ7KX8+ZX8rGmfzNfa+e8N61rW+" +
+ "7dR7fverHxYAAAAAAAAAAAAAAAAAAIziCpgs444=",
+ "white.ico":
+ "eJzt1zsOgkAYReFLKCztjJ2ULsAed6bLcRnuwYTaJVhSmIwzGBLEiD8D+EjO" +
+ "p2NB9NyosVBK/C3L5B+XOmykhaS1P/6StrpfBzoUp6J5nyj7fJ80Hj1hLEev" +
+ "TNqPiNsnouPGib/uD4y/naD/3b59wtV6xY199+in+paJV31LvO9ERNz4KfX9" +
+ "ZqNXIsr2iSHxjqGxspha9Sspc+f2qXNK3FXalVJ+kVZnaR7OUZrtpbR5FP5W" +
+ "tE77OeF1dSP0Qjf0w06153c/+mYBAAAAAAAAAAAAAAAAAMAobj//I7s=",
+ "yellow.ico":
+ "eJzt1zsOgkAYReFLKCztjJ2ULsAedybLcRkuxSVYUpiM82M0ihGHgVFJzidY" +
+ "ED03vgqlzN+KQv5+qf1GWkha+9Nf0lbX60AX556ORNnXI2k8eiKwHL2StB8R" +
+ "D5+IjgdOTLo/MP5xgv5v+8ETd/3iYf2W/+oHTLzth4t4/3sZ/WszZGWIpPGO" +
+ "IUxE8yupS+eq3H9smTtLu1oqT9LqKM3tPEizSsofT9nfitbZfow979awnnWt" +
+ "bzvNnt/96osFAAAAAAAAAAAAAAAAAACjuABhjmIs",
+ "black1.ico":
+ "eJzt0zEOgkAUANEhFpZSGTstTWzkVt5Cj8ZROAIHMNGPWBCFDYgxMZkHn2Iz" +
+ "G5YCyOLKc+K54XSANbCPiSV2tOt/qjgW3XtSnN41FH/Qv29Jx/P7qefp7W8P" +
+ "4z85HQ+9JRG/7BpTft31DPUKyiVcFjEZzQ/TTtdzrWnKmCr6evv780qSJEmS" +
+ "JEmSJEmSJEmSpPnunVFDcA==",
+ "green1.ico":
+ "eJzt0zEKwkAQRuEXLCyTSuy0DHgxb6F4shzFI+QAgpkkFoombowIwvt2Z4vh" +
+ "X5gtFrJYRUGca/Y7WAFlVLTY0vf/1elxTwqP3xoKf5B/vjIenp+fOs+r/LWT" +
+ "/uQ34aGpUqQnv+1ygDqHagnHRVRG+2H6unfrtZkq6hz5evP7eSVJkiRJkiRJ" +
+ "kiRJkiRJ0nwNoWQ+AA==",
+ "yellow1.ico":
+ "eJzt0zEKwkAQRuEXLCxNJXZaCl7MW8Sj5SgeIQcQ4oS1UDTJxkhAeN/ubDH8" +
+ "C7PFQhGrLIlzx/kEW+AYFS0OpP6/atuXPSk8fKsv/EX+/cpweH5+6jyf8kn+" +
+ "k0fCfVPlyE/+2q2CZgP1Gi6rqILuw6R69uh1mTrqGvlmv/y8kiRJkiRJkiRJ" +
+ "kiRJkiRpvjsp9L8k",
+ "alloc.gif":
+ "eJxz93SzsEw0YRBh+M4ABi0MS3ue///P8H8UjIIRBhR/sjAyMDAx6IAyAihP" +
+ "MHAcYWDlkPHYsOBgM4ewVsyJDQsPNzEoebF8CHjo0smjH3dmRsDjI33C7Dw3" +
+ "MiYuOtjNyDShRSNwyemJguJJKhaGS32nGka61Vg2NJyYKRd+bY+nwtMzjbqV" +
+ "Qh84gxMCJgnlL4vJuqJyaa5NfFLNLsNVV2a7syacfVWkHd4bv7RN1ltM7ejm" +
+ "tMtNZ19Oyb02p8C3aqr3dr2GbXl/7fZyOej5rW653WZ7MzzHZV+v7O2/EZM+" +
+ "Pt45kbX6ScWHNWfOilo3n5thucXv8org1XF3DRQYrAEWiVY3"
+}
+
+def GetIcons():
+ return icons.keys()
+
+def CreateIcon(icon, savedir):
+ try:
+ f = open(join(savedir,icon),"wb")
+ f.write(decompress(a2b_base64(icons[icon])))
+ success = 1
+ except:
+ success = 0
+ try:
+ f.close()
+ except:
+ pass
+ return success
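+
+# --- illustrative sketch (not part of the original file) ---
+# CreateIcon() decodes one of the base64+zlib blobs above and writes it into
+# savedir, returning 1 on success and 0 on failure.  A minimal caller could
+# unpack the whole set; the target directory is just an example path:
+#
+#     for name in GetIcons():
+#         if not CreateIcon(name, '/tmp/bt-icons'):
+#             print 'could not write ' + name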
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from clock import clock
+
+class Measure:
+ def __init__(self, max_rate_period, fudge = 1):
+ self.max_rate_period = max_rate_period
+ self.ratesince = clock() - fudge
+ self.last = self.ratesince
+ self.rate = 0.0
+        self.total = 0L
+
+ def update_rate(self, amount):
+ self.total += amount
+ t = clock()
+ self.rate = (self.rate * (self.last - self.ratesince) +
+ amount) / (t - self.ratesince + 0.0001)
+ self.last = t
+ if self.ratesince < t - self.max_rate_period:
+ self.ratesince = t - self.max_rate_period
+
+ def get_rate(self):
+ self.update_rate(0)
+ return self.rate
+
+ def get_rate_noupdate(self):
+ return self.rate
+
+ def time_until_rate(self, newrate):
+ if self.rate <= newrate:
+ return 0
+ t = clock() - self.ratesince
+ return ((self.rate * t) / newrate) - t
+
+ def get_total(self):
+ return self.total
\ No newline at end of file
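+
+# --- illustrative sketch (not part of the original file) ---
+# Measure keeps a rolling average over at most max_rate_period seconds:
+# update_rate() folds each new byte count into the running rate, and
+# get_rate() refreshes the window before reporting.  For example:
+#
+#     m = Measure(20.0)        # average over roughly the last 20 seconds
+#     m.update_rate(16384)     # record a 16 KiB transfer
+#     rate = m.get_rate()      # bytes per second over the recent window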
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from cStringIO import StringIO
+from sys import stdout
+import time
+from clock import clock
+from gzip import GzipFile
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+DEBUG = False
+
+weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
+
+months = [None, 'Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
+ 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
+
+class HTTPConnection:
+ def __init__(self, handler, connection):
+ self.handler = handler
+ self.connection = connection
+ self.buf = ''
+ self.closed = False
+ self.done = False
+ self.donereading = False
+ self.next_func = self.read_type
+
+ def get_ip(self):
+ return self.connection.get_ip()
+
+ def data_came_in(self, data):
+ if self.donereading or self.next_func is None:
+ return True
+ self.buf += data
+ while True:
+ try:
+ i = self.buf.index('\n')
+ except ValueError:
+ return True
+ val = self.buf[:i]
+ self.buf = self.buf[i+1:]
+ self.next_func = self.next_func(val)
+ if self.donereading:
+ return True
+ if self.next_func is None or self.closed:
+ return False
+
+ def read_type(self, data):
+ self.header = data.strip()
+ words = data.split()
+ if len(words) == 3:
+ self.command, self.path, garbage = words
+ self.pre1 = False
+ elif len(words) == 2:
+ self.command, self.path = words
+ self.pre1 = True
+ if self.command != 'GET':
+ return None
+ else:
+ return None
+ if self.command not in ('HEAD', 'GET'):
+ return None
+ self.headers = {}
+ return self.read_header
+
+ def read_header(self, data):
+ data = data.strip()
+ if data == '':
+ self.donereading = True
+ if self.headers.get('accept-encoding','').find('gzip') > -1:
+ self.encoding = 'gzip'
+ else:
+ self.encoding = 'identity'
+ r = self.handler.getfunc(self, self.path, self.headers)
+ if r is not None:
+ self.answer(r)
+ return None
+ try:
+ i = data.index(':')
+ except ValueError:
+ return None
+ self.headers[data[:i].strip().lower()] = data[i+1:].strip()
+ if DEBUG:
+ print data[:i].strip() + ": " + data[i+1:].strip()
+ return self.read_header
+
+ def answer(self, (responsecode, responsestring, headers, data)):
+ if self.closed:
+ return
+ if self.encoding == 'gzip':
+ compressed = StringIO()
+ gz = GzipFile(fileobj = compressed, mode = 'wb', compresslevel = 9)
+ gz.write(data)
+ gz.close()
+ cdata = compressed.getvalue()
+ if len(cdata) >= len(data):
+ self.encoding = 'identity'
+ else:
+ if DEBUG:
+ print "Compressed: %i Uncompressed: %i\n" % (len(cdata),len(data))
+ data = cdata
+ headers['Content-Encoding'] = 'gzip'
+
+        # I'm abusing the identd field here, but this should be OK
+ if self.encoding == 'identity':
+ ident = '-'
+ else:
+ ident = self.encoding
+ self.handler.log( self.connection.get_ip(), ident, '-',
+ self.header, responsecode, len(data),
+ self.headers.get('referer','-'),
+ self.headers.get('user-agent','-') )
+ self.done = True
+ r = StringIO()
+ r.write('HTTP/1.0 ' + str(responsecode) + ' ' +
+ responsestring + '\r\n')
+ if not self.pre1:
+ headers['Content-Length'] = len(data)
+ for key, value in headers.items():
+ r.write(key + ': ' + str(value) + '\r\n')
+ r.write('\r\n')
+ if self.command != 'HEAD':
+ r.write(data)
+ self.connection.write(r.getvalue())
+ if self.connection.is_flushed():
+ self.connection.shutdown(1)
+
+class HTTPHandler:
+ def __init__(self, getfunc, minflush):
+ self.connections = {}
+ self.getfunc = getfunc
+ self.minflush = minflush
+ self.lastflush = clock()
+
+ def external_connection_made(self, connection):
+ self.connections[connection] = HTTPConnection(self, connection)
+
+ def connection_flushed(self, connection):
+ if self.connections[connection].done:
+ connection.shutdown(1)
+
+ def connection_lost(self, connection):
+ ec = self.connections[connection]
+ ec.closed = True
+ del ec.connection
+ del ec.next_func
+ del self.connections[connection]
+
+ def data_came_in(self, connection, data):
+ c = self.connections[connection]
+ if not c.data_came_in(data) and not c.closed:
+ c.connection.shutdown(1)
+
+ def log(self, ip, ident, username, header,
+ responsecode, length, referrer, useragent):
+ year, month, day, hour, minute, second, a, b, c = time.localtime(time.time())
+ print '%s %s %s [%02d/%3s/%04d:%02d:%02d:%02d] "%s" %i %i "%s" "%s"' % (
+ ip, ident, username, day, months[month], year, hour,
+ minute, second, header, responsecode, length, referrer, useragent)
+ t = clock()
+ if t - self.lastflush > self.minflush:
+ self.lastflush = t
+ stdout.flush()
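+
+# --- illustrative sketch (not part of the original file) ---
+# The getfunc passed to HTTPHandler is called as getfunc(connection, path,
+# headers) and must return a (responsecode, responsestring, headers, data)
+# tuple, which HTTPConnection.answer() then serializes (gzipping the body when
+# the client's Accept-Encoding includes gzip).  hello_getfunc is example code
+# only:
+#
+#     def hello_getfunc(connection, path, headers):
+#         body = 'hello from ' + connection.get_ip()
+#         return (200, 'OK', {'Content-Type': 'text/plain'}, body)
+#
+#     handler = HTTPHandler(hello_getfunc, minflush = 60)   # flush the log at most once per 60 s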
--- /dev/null
+# edit this file to enable/disable Psyco
+# psyco = 1 -- enabled
+# psyco = 0 -- disabled
+
+psyco = 0
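+
+# --- illustrative sketch (not part of the original file) ---
+# Code that wants to honour this flag typically wraps the Psyco import in a
+# guard; psyco.full() enables the JIT for the whole program when the Psyco
+# extension is installed.  How the real entry points import this module is not
+# shown here, so treat the snippet as an assumption-laden example:
+#
+#     if psyco:
+#         try:
+#             import psyco
+#             psyco.full()
+#         except ImportError:
+#             pass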
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from traceback import print_exc
+from binascii import b2a_hex
+from clock import clock
+from CurrentRateMeasure import Measure
+from cStringIO import StringIO
+from math import sqrt
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+try:
+ sum([1])
+except:
+ sum = lambda a: reduce(lambda x,y: x+y, a, 0)
+
+DEBUG = False
+
+MAX_RATE_PERIOD = 20.0
+MAX_RATE = 10e10
+PING_BOUNDARY = 1.2
+PING_SAMPLES = 7
+PING_DISCARDS = 1
+PING_THRESHHOLD = 5
+PING_DELAY = 5 # cycles 'til first upward adjustment
+PING_DELAY_NEXT = 2 # 'til next
+ADJUST_UP = 1.05
+ADJUST_DOWN = 0.95
+UP_DELAY_FIRST = 5
+UP_DELAY_NEXT = 2
+SLOTS_STARTING = 6
+SLOTS_FACTOR = 1.66/1000
+
+class RateLimiter:
+ def __init__(self, sched, unitsize, slotsfunc = lambda x: None):
+ self.sched = sched
+ self.last = None
+ self.unitsize = unitsize
+ self.slotsfunc = slotsfunc
+ self.measure = Measure(MAX_RATE_PERIOD)
+ self.autoadjust = False
+ self.upload_rate = MAX_RATE * 1000
+ self.slots = SLOTS_STARTING # garbage if not automatic
+
+ def set_upload_rate(self, rate):
+ # rate = -1 # test automatic
+ if rate < 0:
+ if self.autoadjust:
+ return
+ self.autoadjust = True
+ self.autoadjustup = 0
+ self.pings = []
+ rate = MAX_RATE
+ self.slots = SLOTS_STARTING
+ self.slotsfunc(self.slots)
+ else:
+ self.autoadjust = False
+ if not rate:
+ rate = MAX_RATE
+ self.upload_rate = rate * 1000
+ self.lasttime = clock()
+ self.bytes_sent = 0
+
+ def queue(self, conn):
+ assert conn.next_upload is None
+ if self.last is None:
+ self.last = conn
+ conn.next_upload = conn
+ self.try_send(True)
+ else:
+ conn.next_upload = self.last.next_upload
+ self.last.next_upload = conn
+ self.last = conn
+
+ def try_send(self, check_time = False):
+ t = clock()
+ self.bytes_sent -= (t - self.lasttime) * self.upload_rate
+ self.lasttime = t
+ if check_time:
+ self.bytes_sent = max(self.bytes_sent, 0)
+ cur = self.last.next_upload
+ while self.bytes_sent <= 0:
+ bytes = cur.send_partial(self.unitsize)
+ self.bytes_sent += bytes
+ self.measure.update_rate(bytes)
+ if bytes == 0 or cur.backlogged():
+ if self.last is cur:
+ self.last = None
+ cur.next_upload = None
+ break
+ else:
+ self.last.next_upload = cur.next_upload
+ cur.next_upload = None
+ cur = self.last.next_upload
+ else:
+ self.last = cur
+ cur = cur.next_upload
+ else:
+ self.sched(self.try_send, self.bytes_sent / self.upload_rate)
+
+ def adjust_sent(self, bytes):
+ self.bytes_sent = min(self.bytes_sent+bytes, self.upload_rate*3)
+ self.measure.update_rate(bytes)
+
+
+ def ping(self, delay):
+ if DEBUG:
+ print delay
+ if not self.autoadjust:
+ return
+ self.pings.append(delay > PING_BOUNDARY)
+ if len(self.pings) < PING_SAMPLES+PING_DISCARDS:
+ return
+ if DEBUG:
+ print 'cycle'
+ pings = sum(self.pings[PING_DISCARDS:])
+ del self.pings[:]
+ if pings >= PING_THRESHHOLD: # assume flooded
+ if self.upload_rate == MAX_RATE:
+ self.upload_rate = self.measure.get_rate()*ADJUST_DOWN
+ else:
+ self.upload_rate = min(self.upload_rate,
+ self.measure.get_rate()*1.1)
+ self.upload_rate = max(int(self.upload_rate*ADJUST_DOWN),2)
+ self.slots = int(sqrt(self.upload_rate*SLOTS_FACTOR))
+ self.slotsfunc(self.slots)
+ if DEBUG:
+ print 'adjust down to '+str(self.upload_rate)
+ self.lasttime = clock()
+ self.bytes_sent = 0
+ self.autoadjustup = UP_DELAY_FIRST
+ else: # not flooded
+ if self.upload_rate == MAX_RATE:
+ return
+ self.autoadjustup -= 1
+ if self.autoadjustup:
+ return
+ self.upload_rate = int(self.upload_rate*ADJUST_UP)
+ self.slots = int(sqrt(self.upload_rate*SLOTS_FACTOR))
+ self.slotsfunc(self.slots)
+ if DEBUG:
+ print 'adjust up to '+str(self.upload_rate)
+ self.lasttime = clock()
+ self.bytes_sent = 0
+ self.autoadjustup = UP_DELAY_NEXT
+
+
+
+
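+# --- illustrative sketch (not part of the original file) ---
+# RateLimiter drains queued connections round-robin: try_send() keeps paying
+# out unitsize chunks until the byte budget goes positive, then reschedules
+# itself through the sched callback.  DummyConn stands in for the client's
+# real upload-connection objects and exists only for this example:
+#
+#     class DummyConn:
+#         next_upload = None
+#         def send_partial(self, size):
+#             return size              # pretend a full unit was sent
+#         def backlogged(self):
+#             return False
+#
+#     rl = RateLimiter(sched = lambda func, delay: None, unitsize = 1460)
+#     rl.set_upload_rate(100)          # budget of 100 kB/s (stored as bytes/s)
+#     rl.queue(DummyConn())            # starts sending immediately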
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from clock import clock
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+FACTOR = 0.999
+
+class RateMeasure:
+ def __init__(self):
+ self.last = None
+ self.time = 1.0
+ self.got = 0.0
+ self.remaining = None
+ self.broke = False
+ self.got_anything = False
+ self.last_checked = None
+ self.rate = 0
+ self.lastten = False
+
+ def data_came_in(self, amount):
+ if not self.got_anything:
+ self.got_anything = True
+ self.last = clock()
+ return
+ self.update(amount)
+
+ def data_rejected(self, amount):
+ pass
+
+ def get_time_left(self, left):
+ t = clock()
+ if not self.got_anything:
+ return None
+ if t - self.last > 15:
+ self.update(0)
+ try:
+ remaining = left/self.rate
+ if not self.lastten and remaining <= 10:
+ self.lastten = True
+ if self.lastten:
+ return remaining
+ delta = max(remaining/20,2)
+ if self.remaining is None:
+ self.remaining = remaining
+ elif abs(self.remaining-remaining) > delta:
+ self.remaining = remaining
+ else:
+ self.remaining -= t - self.last_checked
+ except ZeroDivisionError:
+ self.remaining = None
+ if self.remaining is not None and self.remaining < 0.1:
+ self.remaining = 0.1
+ self.last_checked = t
+ return self.remaining
+
+ def update(self, amount):
+ t = clock()
+ t1 = int(t)
+ l1 = int(self.last)
+ for i in xrange(l1,t1):
+ self.time *= FACTOR
+ self.got *= FACTOR
+ self.got += amount
+ if t - self.last < 20:
+ self.time += t - self.last
+ self.last = t
+ try:
+ self.rate = self.got / self.time
+ except ZeroDivisionError:
+ pass
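+
+# --- illustrative note (not part of the original file) ---
+# update() decays both the accumulated byte count and the accumulated time by
+# FACTOR (0.999) once for every whole second since the last sample, so old
+# traffic gradually stops influencing rate = got / time, and get_time_left()
+# then smooths the resulting ETA instead of reporting every jitter.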
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from bisect import insort
+from SocketHandler import SocketHandler, UPnP_ERROR
+import socket
+from cStringIO import StringIO
+from traceback import print_exc
+from select import error
+from threading import Thread, Event
+from time import sleep
+from clock import clock
+import sys
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+
+def autodetect_ipv6():
+ try:
+ assert sys.version_info >= (2,3)
+ assert socket.has_ipv6
+ socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
+ except:
+ return 0
+ return 1
+
+def autodetect_socket_style():
+ if sys.platform.find('linux') < 0:
+ return 1
+ else:
+ try:
+ f = open('/proc/sys/net/ipv6/bindv6only','r')
+ dual_socket_style = int(f.read())
+ f.close()
+ return int(not dual_socket_style)
+ except:
+ return 0
+
+
+READSIZE = 100000
+
+class RawServer:
+ def __init__(self, doneflag, timeout_check_interval, timeout, noisy = True,
+ ipv6_enable = True, failfunc = lambda x: None, errorfunc = None,
+ sockethandler = None, excflag = Event()):
+ self.timeout_check_interval = timeout_check_interval
+ self.timeout = timeout
+ self.servers = {}
+ self.single_sockets = {}
+ self.dead_from_write = []
+ self.doneflag = doneflag
+ self.noisy = noisy
+ self.failfunc = failfunc
+ self.errorfunc = errorfunc
+ self.exccount = 0
+ self.funcs = []
+ self.externally_added = []
+ self.finished = Event()
+ self.tasks_to_kill = []
+ self.excflag = excflag
+
+ if sockethandler is None:
+ sockethandler = SocketHandler(timeout, ipv6_enable, READSIZE)
+ self.sockethandler = sockethandler
+ self.add_task(self.scan_for_timeouts, timeout_check_interval)
+
+ def get_exception_flag(self):
+ return self.excflag
+
+ def _add_task(self, func, delay, id = None):
+ assert float(delay) >= 0
+ insort(self.funcs, (clock() + delay, func, id))
+
+ def add_task(self, func, delay = 0, id = None):
+ assert float(delay) >= 0
+ self.externally_added.append((func, delay, id))
+
+ def scan_for_timeouts(self):
+ self.add_task(self.scan_for_timeouts, self.timeout_check_interval)
+ self.sockethandler.scan_for_timeouts()
+
+ def bind(self, port, bind = '', reuse = False,
+ ipv6_socket_style = 1, upnp = False):
+ self.sockethandler.bind(port, bind, reuse, ipv6_socket_style, upnp)
+
+ def find_and_bind(self, minport, maxport, bind = '', reuse = False,
+ ipv6_socket_style = 1, upnp = 0, randomizer = False):
+ return self.sockethandler.find_and_bind(minport, maxport, bind, reuse,
+ ipv6_socket_style, upnp, randomizer)
+
+ def start_connection_raw(self, dns, socktype, handler = None):
+ return self.sockethandler.start_connection_raw(dns, socktype, handler)
+
+ def start_connection(self, dns, handler = None, randomize = False):
+ return self.sockethandler.start_connection(dns, handler, randomize)
+
+ def get_stats(self):
+ return self.sockethandler.get_stats()
+
+ def pop_external(self):
+ while self.externally_added:
+ (a, b, c) = self.externally_added.pop(0)
+ self._add_task(a, b, c)
+
+
+ def listen_forever(self, handler):
+ self.sockethandler.set_handler(handler)
+ try:
+ while not self.doneflag.isSet():
+ try:
+ self.pop_external()
+ self._kill_tasks()
+ if self.funcs:
+ period = self.funcs[0][0] + 0.001 - clock()
+ else:
+ period = 2 ** 30
+ if period < 0:
+ period = 0
+ events = self.sockethandler.do_poll(period)
+ if self.doneflag.isSet():
+ return
+ while self.funcs and self.funcs[0][0] <= clock():
+ garbage1, func, id = self.funcs.pop(0)
+ if id in self.tasks_to_kill:
+                            continue    # killed tasks should be skipped, not run
+ try:
+# print func.func_name
+ func()
+ except (SystemError, MemoryError), e:
+ self.failfunc(str(e))
+ return
+ except KeyboardInterrupt:
+# self.exception(True)
+ return
+ except:
+ if self.noisy:
+ self.exception()
+ self.sockethandler.close_dead()
+ self.sockethandler.handle_events(events)
+ if self.doneflag.isSet():
+ return
+ self.sockethandler.close_dead()
+ except (SystemError, MemoryError), e:
+ self.failfunc(str(e))
+ return
+ except error:
+ if self.doneflag.isSet():
+ return
+ except KeyboardInterrupt:
+# self.exception(True)
+ return
+ except:
+ self.exception()
+ if self.exccount > 10:
+ return
+ finally:
+# self.sockethandler.shutdown()
+ self.finished.set()
+
+ def is_finished(self):
+ return self.finished.isSet()
+
+ def wait_until_finished(self):
+ self.finished.wait()
+
+ def _kill_tasks(self):
+ if self.tasks_to_kill:
+ new_funcs = []
+ for (t, func, id) in self.funcs:
+ if id not in self.tasks_to_kill:
+ new_funcs.append((t, func, id))
+ self.funcs = new_funcs
+ self.tasks_to_kill = []
+
+ def kill_tasks(self, id):
+ self.tasks_to_kill.append(id)
+
+ def exception(self, kbint = False):
+ if not kbint:
+ self.excflag.set()
+ self.exccount += 1
+ if self.errorfunc is None:
+ print_exc()
+ else:
+ data = StringIO()
+ print_exc(file = data)
+# print data.getvalue() # report exception here too
+ if not kbint: # don't report here if it's a keyboard interrupt
+ self.errorfunc(data.getvalue())
+
+ def shutdown(self):
+ self.sockethandler.shutdown()
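+
+# --- illustrative sketch (not part of the original file) ---
+# Typical use: build a RawServer around a threading.Event done flag, schedule
+# periodic work with add_task(), then hand the listening loop a handler object
+# implementing external_connection_made / data_came_in / connection_flushed /
+# connection_lost (as HTTPHandler does).  Everything except the RawServer API
+# itself is example code:
+#
+#     from threading import Event
+#     done = Event()
+#     server = RawServer(done, timeout_check_interval = 60, timeout = 300)
+#
+#     def heartbeat():
+#         server.add_task(heartbeat, 10)   # re-arm every 10 seconds
+#     server.add_task(heartbeat, 10)
+#
+#     server.bind(6881)                    # 6881 is only an example port
+#     server.listen_forever(my_handler)    # my_handler is assumed to exist
+#     server.shutdown()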
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from cStringIO import StringIO
+#from RawServer import RawServer
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+from BT1.Encrypter import protocol_name
+
+default_task_id = []
+
+class SingleRawServer:
+ def __init__(self, info_hash, multihandler, doneflag, protocol):
+ self.info_hash = info_hash
+ self.doneflag = doneflag
+ self.protocol = protocol
+ self.multihandler = multihandler
+ self.rawserver = multihandler.rawserver
+ self.finished = False
+ self.running = False
+ self.handler = None
+ self.taskqueue = []
+
+ def shutdown(self):
+ if not self.finished:
+ self.multihandler.shutdown_torrent(self.info_hash)
+
+ def _shutdown(self):
+ if not self.finished:
+ self.finished = True
+ self.running = False
+ self.rawserver.kill_tasks(self.info_hash)
+ if self.handler:
+ self.handler.close_all()
+
+ def _external_connection_made(self, c, options, already_read):
+ if self.running:
+ c.set_handler(self.handler)
+ self.handler.externally_handshaked_connection_made(
+ c, options, already_read)
+
+ ### RawServer functions ###
+
+ def add_task(self, func, delay=0, id = default_task_id):
+ if id is default_task_id:
+ id = self.info_hash
+ if not self.finished:
+ self.rawserver.add_task(func, delay, id)
+
+# def bind(self, port, bind = '', reuse = False):
+# pass # not handled here
+
+ def start_connection(self, dns, handler = None):
+ if not handler:
+ handler = self.handler
+ c = self.rawserver.start_connection(dns, handler)
+ return c
+
+# def listen_forever(self, handler):
+# pass # don't call with this
+
+ def start_listening(self, handler):
+ self.handler = handler
+ self.running = True
+ return self.shutdown # obviously, doesn't listen forever
+
+ def is_finished(self):
+ return self.finished
+
+ def get_exception_flag(self):
+ return self.rawserver.get_exception_flag()
+
+
+class NewSocketHandler: # hand a new socket off where it belongs
+ def __init__(self, multihandler, connection):
+ self.multihandler = multihandler
+ self.connection = connection
+ connection.set_handler(self)
+ self.closed = False
+ self.buffer = StringIO()
+ self.complete = False
+ self.next_len, self.next_func = 1, self.read_header_len
+ self.multihandler.rawserver.add_task(self._auto_close, 15)
+
+ def _auto_close(self):
+ if not self.complete:
+ self.close()
+
+ def close(self):
+ if not self.closed:
+ self.connection.close()
+ self.closed = True
+
+
+# header format:
+# connection.write(chr(len(protocol_name)) + protocol_name +
+# (chr(0) * 8) + self.encrypter.download_id + self.encrypter.my_id)
+
+ # copied from Encrypter and modified
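+    # For the standard protocol_name ('BitTorrent protocol', 19 bytes) the
+    # full handshake is 68 bytes: 1 length byte, the 19-byte protocol name,
+    # 8 reserved option bytes, a 20-byte info_hash (download_id) and a
+    # 20-byte peer id.  This handler only reads up to the info_hash before
+    # routing the connection to the matching torrent.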
+
+ def read_header_len(self, s):
+ l = ord(s)
+ return l, self.read_header
+
+ def read_header(self, s):
+ self.protocol = s
+ return 8, self.read_reserved
+
+ def read_reserved(self, s):
+ self.options = s
+ return 20, self.read_download_id
+
+ def read_download_id(self, s):
+ if self.multihandler.singlerawservers.has_key(s):
+ if self.multihandler.singlerawservers[s].protocol == self.protocol:
+ return True
+ return None
+
+ def read_dead(self, s):
+ return None
+
+ def data_came_in(self, garbage, s):
+ while True:
+ if self.closed:
+ return
+ i = self.next_len - self.buffer.tell()
+ if i > len(s):
+ self.buffer.write(s)
+ return
+ self.buffer.write(s[:i])
+ s = s[i:]
+ m = self.buffer.getvalue()
+ self.buffer.reset()
+ self.buffer.truncate()
+ try:
+ x = self.next_func(m)
+ except:
+ self.next_len, self.next_func = 1, self.read_dead
+ raise
+ if x is None:
+ self.close()
+ return
+ if x == True: # ready to process
+ self.multihandler.singlerawservers[m]._external_connection_made(
+ self.connection, self.options, s)
+ self.complete = True
+ return
+ self.next_len, self.next_func = x
+
+ def connection_flushed(self, ss):
+ pass
+
+ def connection_lost(self, ss):
+ self.closed = True
+
+class MultiHandler:
+ def __init__(self, rawserver, doneflag):
+ self.rawserver = rawserver
+ self.masterdoneflag = doneflag
+ self.singlerawservers = {}
+ self.connections = {}
+ self.taskqueues = {}
+
+ def newRawServer(self, info_hash, doneflag, protocol=protocol_name):
+ new = SingleRawServer(info_hash, self, doneflag, protocol)
+ self.singlerawservers[info_hash] = new
+ return new
+
+ def shutdown_torrent(self, info_hash):
+ self.singlerawservers[info_hash]._shutdown()
+ del self.singlerawservers[info_hash]
+
+ def listen_forever(self):
+ self.rawserver.listen_forever(self)
+ for srs in self.singlerawservers.values():
+ srs.finished = True
+ srs.running = False
+ srs.doneflag.set()
+
+ ### RawServer handler functions ###
+ # be wary of name collisions
+
+ def external_connection_made(self, ss):
+ NewSocketHandler(self, ss)
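+
+# --- illustrative sketch (not part of the original file) ---
+# MultiHandler multiplexes one listening RawServer across several torrents:
+# every torrent gets its own SingleRawServer keyed by info_hash, and incoming
+# connections are routed by the info_hash read from the handshake above.
+#
+#     mh = MultiHandler(rawserver, doneflag)
+#     srs = mh.newRawServer(info_hash, torrent_doneflag)
+#     srs.start_listening(peer_handler)    # e.g. the torrent's Encrypter
+#     mh.listen_forever()                  # runs the shared event loop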
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+import socket
+from errno import EWOULDBLOCK, ECONNREFUSED, EHOSTUNREACH
+try:
+ from select import poll, error, POLLIN, POLLOUT, POLLERR, POLLHUP
+ timemult = 1000
+except ImportError:
+ from selectpoll import poll, error, POLLIN, POLLOUT, POLLERR, POLLHUP
+ timemult = 1
+from time import sleep
+from clock import clock
+import sys
+from random import shuffle, randrange
+from natpunch import UPnP_open_port, UPnP_close_port
+# from BT1.StreamCheck import StreamCheck
+# import inspect
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+all = POLLIN | POLLOUT
+
+UPnP_ERROR = "unable to forward port via UPnP"
+
+class SingleSocket:
+ def __init__(self, socket_handler, sock, handler, ip = None):
+ self.socket_handler = socket_handler
+ self.socket = sock
+ self.handler = handler
+ self.buffer = []
+ self.last_hit = clock()
+ self.fileno = sock.fileno()
+ self.connected = False
+ self.skipped = 0
+# self.check = StreamCheck()
+ try:
+ self.ip = self.socket.getpeername()[0]
+ except:
+ if ip is None:
+ self.ip = 'unknown'
+ else:
+ self.ip = ip
+
+ def get_ip(self, real=False):
+ if real:
+ try:
+ self.ip = self.socket.getpeername()[0]
+ except:
+ pass
+ return self.ip
+
+ def close(self):
+ '''
+ for x in xrange(5,0,-1):
+ try:
+ f = inspect.currentframe(x).f_code
+ print (f.co_filename,f.co_firstlineno,f.co_name)
+ del f
+ except:
+ pass
+ print ''
+ '''
+ assert self.socket
+ self.connected = False
+ sock = self.socket
+ self.socket = None
+ self.buffer = []
+ del self.socket_handler.single_sockets[self.fileno]
+ self.socket_handler.poll.unregister(sock)
+ sock.close()
+
+ def shutdown(self, val):
+ self.socket.shutdown(val)
+
+ def is_flushed(self):
+ return not self.buffer
+
+ def write(self, s):
+# self.check.write(s)
+ assert self.socket is not None
+ self.buffer.append(s)
+ if len(self.buffer) == 1:
+ self.try_write()
+
+ def try_write(self):
+ if self.connected:
+ dead = False
+ try:
+ while self.buffer:
+ buf = self.buffer[0]
+ amount = self.socket.send(buf)
+ if amount == 0:
+ self.skipped += 1
+ break
+ self.skipped = 0
+ if amount != len(buf):
+ self.buffer[0] = buf[amount:]
+ break
+ del self.buffer[0]
+ except socket.error, e:
+ try:
+ dead = e[0] != EWOULDBLOCK
+ except:
+ dead = True
+ self.skipped += 1
+ if self.skipped >= 3:
+ dead = True
+ if dead:
+ self.socket_handler.dead_from_write.append(self)
+ return
+ if self.buffer:
+ self.socket_handler.poll.register(self.socket, all)
+ else:
+ self.socket_handler.poll.register(self.socket, POLLIN)
+
+ def set_handler(self, handler):
+ self.handler = handler
+
+class SocketHandler:
+ def __init__(self, timeout, ipv6_enable, readsize = 100000):
+ self.timeout = timeout
+ self.ipv6_enable = ipv6_enable
+ self.readsize = readsize
+ self.poll = poll()
+ # {socket: SingleSocket}
+ self.single_sockets = {}
+ self.dead_from_write = []
+ self.max_connects = 1000
+ self.port_forwarded = None
+ self.servers = {}
+
+ def scan_for_timeouts(self):
+ t = clock() - self.timeout
+ tokill = []
+ for s in self.single_sockets.values():
+ if s.last_hit < t:
+ tokill.append(s)
+ for k in tokill:
+ if k.socket is not None:
+ self._close_socket(k)
+
+ def bind(self, port, bind = '', reuse = False, ipv6_socket_style = 1, upnp = 0):
+ port = int(port)
+ addrinfos = []
+ self.servers = {}
+ self.interfaces = []
+        # if bind != "" treat it as a comma-separated list and bind to all
+ # addresses (can be ips or hostnames) else bind to default ipv6 and
+ # ipv4 address
+ if bind:
+ if self.ipv6_enable:
+ socktype = socket.AF_UNSPEC
+ else:
+ socktype = socket.AF_INET
+ bind = bind.split(',')
+ for addr in bind:
+ if sys.version_info < (2,2):
+ addrinfos.append((socket.AF_INET, None, None, None, (addr, port)))
+ else:
+ addrinfos.extend(socket.getaddrinfo(addr, port,
+ socktype, socket.SOCK_STREAM))
+ else:
+ if self.ipv6_enable:
+ addrinfos.append([socket.AF_INET6, None, None, None, ('', port)])
+ if not addrinfos or ipv6_socket_style != 0:
+ addrinfos.append([socket.AF_INET, None, None, None, ('', port)])
+ for addrinfo in addrinfos:
+ try:
+ server = socket.socket(addrinfo[0], socket.SOCK_STREAM)
+ if reuse:
+ server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ server.setblocking(0)
+ server.bind(addrinfo[4])
+ self.servers[server.fileno()] = server
+ if bind:
+ self.interfaces.append(server.getsockname()[0])
+ server.listen(64)
+ self.poll.register(server, POLLIN)
+ except socket.error, e:
+ for server in self.servers.values():
+ try:
+ server.close()
+ except:
+ pass
+ if self.ipv6_enable and ipv6_socket_style == 0 and self.servers:
+ raise socket.error('blocked port (may require ipv6_binds_v4 to be set)')
+ raise socket.error(str(e))
+ if not self.servers:
+ raise socket.error('unable to open server port')
+ if upnp:
+ if not UPnP_open_port(port):
+ for server in self.servers.values():
+ try:
+ server.close()
+ except:
+ pass
+ self.servers = None
+ self.interfaces = None
+ raise socket.error(UPnP_ERROR)
+ self.port_forwarded = port
+ self.port = port
+
+ def find_and_bind(self, minport, maxport, bind = '', reuse = False,
+ ipv6_socket_style = 1, upnp = 0, randomizer = False):
+ e = 'maxport less than minport - no ports to check'
+ if maxport-minport < 50 or not randomizer:
+ portrange = range(minport, maxport+1)
+ if randomizer:
+ shuffle(portrange)
+ portrange = portrange[:20] # check a maximum of 20 ports
+ else:
+ portrange = []
+ while len(portrange) < 20:
+ listen_port = randrange(minport, maxport+1)
+ if not listen_port in portrange:
+ portrange.append(listen_port)
+ for listen_port in portrange:
+ try:
+ self.bind(listen_port, bind,
+ ipv6_socket_style = ipv6_socket_style, upnp = upnp)
+ return listen_port
+ except socket.error, e:
+ pass
+ raise socket.error(str(e))
+
+
+ def set_handler(self, handler):
+ self.handler = handler
+
+
+ def start_connection_raw(self, dns, socktype = socket.AF_INET, handler = None):
+ if handler is None:
+ handler = self.handler
+ sock = socket.socket(socktype, socket.SOCK_STREAM)
+ sock.setblocking(0)
+ try:
+ sock.connect_ex(dns)
+ except socket.error:
+ raise
+ except Exception, e:
+ raise socket.error(str(e))
+ self.poll.register(sock, POLLIN)
+ s = SingleSocket(self, sock, handler, dns[0])
+ self.single_sockets[sock.fileno()] = s
+ return s
+
+
+ def start_connection(self, dns, handler = None, randomize = False):
+ if handler is None:
+ handler = self.handler
+ if sys.version_info < (2,2):
+ s = self.start_connection_raw(dns,socket.AF_INET,handler)
+ else:
+ if self.ipv6_enable:
+ socktype = socket.AF_UNSPEC
+ else:
+ socktype = socket.AF_INET
+ try:
+ addrinfos = socket.getaddrinfo(dns[0], int(dns[1]),
+ socktype, socket.SOCK_STREAM)
+ except socket.error, e:
+ raise
+ except Exception, e:
+ raise socket.error(str(e))
+ if randomize:
+ shuffle(addrinfos)
+ for addrinfo in addrinfos:
+ try:
+ s = self.start_connection_raw(addrinfo[4],addrinfo[0],handler)
+ break
+ except:
+ pass
+ else:
+ raise socket.error('unable to connect')
+ return s
+
+
+ def _sleep(self):
+ sleep(1)
+
+ def handle_events(self, events):
+ for sock, event in events:
+ s = self.servers.get(sock)
+ if s:
+ if event & (POLLHUP | POLLERR) != 0:
+ self.poll.unregister(s)
+ s.close()
+ del self.servers[sock]
+ print "lost server socket"
+ elif len(self.single_sockets) < self.max_connects:
+ try:
+ newsock, addr = s.accept()
+ newsock.setblocking(0)
+ nss = SingleSocket(self, newsock, self.handler)
+ self.single_sockets[newsock.fileno()] = nss
+ self.poll.register(newsock, POLLIN)
+ self.handler.external_connection_made(nss)
+ except socket.error:
+ self._sleep()
+ else:
+ s = self.single_sockets.get(sock)
+ if not s:
+ continue
+ s.connected = True
+ if (event & (POLLHUP | POLLERR)):
+ self._close_socket(s)
+ continue
+ if (event & POLLIN):
+ try:
+ s.last_hit = clock()
+ data = s.socket.recv(100000)
+ if not data:
+ self._close_socket(s)
+ else:
+ s.handler.data_came_in(s, data)
+ except socket.error, e:
+ code, msg = e
+ if code != EWOULDBLOCK:
+ self._close_socket(s)
+ continue
+ if (event & POLLOUT) and s.socket and not s.is_flushed():
+ s.try_write()
+ if s.is_flushed():
+ s.handler.connection_flushed(s)
+
+ def close_dead(self):
+ while self.dead_from_write:
+ old = self.dead_from_write
+ self.dead_from_write = []
+ for s in old:
+ if s.socket:
+ self._close_socket(s)
+
+ def _close_socket(self, s):
+ s.close()
+ s.handler.connection_lost(s)
+
+ def do_poll(self, t):
+ r = self.poll.poll(t*timemult)
+ if r is None:
+ connects = len(self.single_sockets)
+ to_close = int(connects*0.05)+1 # close 5% of sockets
+ self.max_connects = connects-to_close
+ closelist = self.single_sockets.values()
+ shuffle(closelist)
+ closelist = closelist[:to_close]
+ for sock in closelist:
+ self._close_socket(sock)
+ return []
+ return r
+
+ def get_stats(self):
+ return { 'interfaces': self.interfaces,
+ 'port': self.port,
+ 'upnp': self.port_forwarded is not None }
+
+
+ def shutdown(self):
+ for ss in self.single_sockets.values():
+ try:
+ ss.close()
+ except:
+ pass
+ for server in self.servers.values():
+ try:
+ server.close()
+ except:
+ pass
+ if self.port_forwarded is not None:
+ UPnP_close_port(self.port_forwarded)
+
--- /dev/null
+product_name = 'BitTornado'
+version_short = 'T-0.3.17'
+
+version = version_short+' ('+product_name+')'
+report_email = version_short+'@degreez.net'
+
+from types import StringType
+from sha import sha
+from time import time, clock
+try:
+ from os import getpid
+except ImportError:
+ def getpid():
+ return 1
+
+mapbase64 = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.-'
+
+_idprefix = version_short[0]
+for subver in version_short[2:].split('.'):
+ try:
+ subver = int(subver)
+ except:
+ subver = 0
+ _idprefix += mapbase64[subver]
+_idprefix += ('-' * (6-len(_idprefix)))
+_idrandom = [None]
+
+def resetPeerIDs():
+ try:
+ f = open('/dev/urandom','rb')
+ x = f.read(20)
+ f.close()
+ except:
+ x = ''
+
+ l1 = 0
+ t = clock()
+ while t == clock():
+ l1 += 1
+ l2 = 0
+ t = long(time()*100)
+ while t == long(time()*100):
+ l2 += 1
+ l3 = 0
+ if l2 < 1000:
+ t = long(time()*10)
+ while t == long(clock()*10):
+ l3 += 1
+ x += ( repr(time()) + '/' + str(time()) + '/'
+ + str(l1) + '/' + str(l2) + '/' + str(l3) + '/'
+ + str(getpid()) )
+
+ s = ''
+ for i in sha(x).digest()[-11:]:
+ s += mapbase64[ord(i) & 0x3F]
+ _idrandom[0] = s
+
+resetPeerIDs()
+
+def createPeerID(ins = '---'):
+ assert type(ins) is StringType
+ assert len(ins) == 3
+ return _idprefix + ins + _idrandom[0]
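+
+# Worked example (derived from the code above, not part of the original
+# source): with version_short 'T-0.3.17' the prefix evaluates to 'T03H--',
+# so createPeerID('---') returns 'T03H--' + '---' + 11 random base64
+# characters, i.e. the 20-byte peer ID the BitTorrent wire protocol expects.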
--- /dev/null
+# Written by Petru Paler, Uoti Urpala, Ross Cohen and John Hoffman
+# see LICENSE.txt for license information
+
+from types import IntType, LongType, StringType, ListType, TupleType, DictType
+try:
+ from types import BooleanType
+except ImportError:
+ BooleanType = None
+try:
+ from types import UnicodeType
+except ImportError:
+ UnicodeType = None
+from cStringIO import StringIO
+
+def decode_int(x, f):
+ f += 1
+ newf = x.index('e', f)
+ try:
+ n = int(x[f:newf])
+ except:
+ n = long(x[f:newf])
+ if x[f] == '-':
+ if x[f + 1] == '0':
+ raise ValueError
+ elif x[f] == '0' and newf != f+1:
+ raise ValueError
+ return (n, newf+1)
+
+def decode_string(x, f):
+ colon = x.index(':', f)
+ try:
+ n = int(x[f:colon])
+ except (OverflowError, ValueError):
+ n = long(x[f:colon])
+ if x[f] == '0' and colon != f+1:
+ raise ValueError
+ colon += 1
+ return (x[colon:colon+n], colon+n)
+
+def decode_unicode(x, f):
+ s, f = decode_string(x, f+1)
+ return (s.decode('UTF-8'),f)
+
+def decode_list(x, f):
+ r, f = [], f+1
+ while x[f] != 'e':
+ v, f = decode_func[x[f]](x, f)
+ r.append(v)
+ return (r, f + 1)
+
+def decode_dict(x, f):
+ r, f = {}, f+1
+ lastkey = None
+ while x[f] != 'e':
+ k, f = decode_string(x, f)
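+        # bencoded dictionary keys must be unique and appear in ascending order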
+ if lastkey >= k:
+ raise ValueError
+ lastkey = k
+ r[k], f = decode_func[x[f]](x, f)
+ return (r, f + 1)
+
+decode_func = {}
+decode_func['l'] = decode_list
+decode_func['d'] = decode_dict
+decode_func['i'] = decode_int
+decode_func['0'] = decode_string
+decode_func['1'] = decode_string
+decode_func['2'] = decode_string
+decode_func['3'] = decode_string
+decode_func['4'] = decode_string
+decode_func['5'] = decode_string
+decode_func['6'] = decode_string
+decode_func['7'] = decode_string
+decode_func['8'] = decode_string
+decode_func['9'] = decode_string
+#decode_func['u'] = decode_unicode
+
+def bdecode(x, sloppy = 0):
+ try:
+ r, l = decode_func[x[0]](x, 0)
+# except (IndexError, KeyError):
+ except (IndexError, KeyError, ValueError):
+ raise ValueError, "bad bencoded data"
+ if not sloppy and l != len(x):
+ raise ValueError, "bad bencoded data"
+ return r
+
+def test_bdecode():
+ try:
+ bdecode('0:0:')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('ie')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('i341foo382e')
+ assert 0
+ except ValueError:
+ pass
+ assert bdecode('i4e') == 4L
+ assert bdecode('i0e') == 0L
+ assert bdecode('i123456789e') == 123456789L
+ assert bdecode('i-10e') == -10L
+ try:
+ bdecode('i-0e')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('i123')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('i6easd')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('35208734823ljdahflajhdf')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('2:abfdjslhfld')
+ assert 0
+ except ValueError:
+ pass
+ assert bdecode('0:') == ''
+ assert bdecode('3:abc') == 'abc'
+ assert bdecode('10:1234567890') == '1234567890'
+ try:
+ bdecode('02:xy')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('l')
+ assert 0
+ except ValueError:
+ pass
+ assert bdecode('le') == []
+ try:
+ bdecode('leanfdldjfh')
+ assert 0
+ except ValueError:
+ pass
+ assert bdecode('l0:0:0:e') == ['', '', '']
+ try:
+ bdecode('relwjhrlewjh')
+ assert 0
+ except ValueError:
+ pass
+ assert bdecode('li1ei2ei3ee') == [1, 2, 3]
+ assert bdecode('l3:asd2:xye') == ['asd', 'xy']
+ assert bdecode('ll5:Alice3:Bobeli2ei3eee') == [['Alice', 'Bob'], [2, 3]]
+ try:
+ bdecode('d')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('defoobar')
+ assert 0
+ except ValueError:
+ pass
+ assert bdecode('de') == {}
+ assert bdecode('d3:agei25e4:eyes4:bluee') == {'age': 25, 'eyes': 'blue'}
+ assert bdecode('d8:spam.mp3d6:author5:Alice6:lengthi100000eee') == {'spam.mp3': {'author': 'Alice', 'length': 100000}}
+ try:
+ bdecode('d3:fooe')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('di1e0:e')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('d1:b0:1:a0:e')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('d1:a0:1:a0:e')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('i03e')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('l01:ae')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('9999:x')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('l0:')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('d0:0:')
+ assert 0
+ except ValueError:
+ pass
+ try:
+ bdecode('d0:')
+ assert 0
+ except ValueError:
+ pass
+
+bencached_marker = []
+
+class Bencached:
+ def __init__(self, s):
+ self.marker = bencached_marker
+ self.bencoded = s
+
+BencachedType = type(Bencached('')) # insufficient, but good as a filter
+
+def encode_bencached(x,r):
+ assert x.marker == bencached_marker
+ r.append(x.bencoded)
+
+def encode_int(x,r):
+ r.extend(('i',str(x),'e'))
+
+def encode_bool(x,r):
+ encode_int(int(x),r)
+
+def encode_string(x,r):
+ r.extend((str(len(x)),':',x))
+
+def encode_unicode(x,r):
+ #r.append('u')
+ encode_string(x.encode('UTF-8'),r)
+
+def encode_list(x,r):
+ r.append('l')
+ for e in x:
+ encode_func[type(e)](e, r)
+ r.append('e')
+
+def encode_dict(x,r):
+ r.append('d')
+ ilist = x.items()
+ ilist.sort()
+ for k,v in ilist:
+ r.extend((str(len(k)),':',k))
+ encode_func[type(v)](v, r)
+ r.append('e')
+
+encode_func = {}
+encode_func[BencachedType] = encode_bencached
+encode_func[IntType] = encode_int
+encode_func[LongType] = encode_int
+encode_func[StringType] = encode_string
+encode_func[ListType] = encode_list
+encode_func[TupleType] = encode_list
+encode_func[DictType] = encode_dict
+if BooleanType:
+ encode_func[BooleanType] = encode_bool
+if UnicodeType:
+ encode_func[UnicodeType] = encode_unicode
+
+def bencode(x):
+ r = []
+ try:
+ encode_func[type(x)](x, r)
+ except:
+ print "*** error *** could not encode type %s (value: %s)" % (type(x), x)
+ assert 0
+ return ''.join(r)
+
+def test_bencode():
+ assert bencode(4) == 'i4e'
+ assert bencode(0) == 'i0e'
+ assert bencode(-10) == 'i-10e'
+ assert bencode(12345678901234567890L) == 'i12345678901234567890e'
+ assert bencode('') == '0:'
+ assert bencode('abc') == '3:abc'
+ assert bencode('1234567890') == '10:1234567890'
+ assert bencode([]) == 'le'
+ assert bencode([1, 2, 3]) == 'li1ei2ei3ee'
+ assert bencode([['Alice', 'Bob'], [2, 3]]) == 'll5:Alice3:Bobeli2ei3eee'
+ assert bencode({}) == 'de'
+ assert bencode({'age': 25, 'eyes': 'blue'}) == 'd3:agei25e4:eyes4:bluee'
+ assert bencode({'spam.mp3': {'author': 'Alice', 'length': 100000}}) == 'd8:spam.mp3d6:author5:Alice6:lengthi100000eee'
+ try:
+ bencode({1: 'foo'})
+ assert 0
+ except AssertionError:
+ pass
+
+
+try:
+ import psyco
+ psyco.bind(bdecode)
+ psyco.bind(bencode)
+except ImportError:
+ pass
--- /dev/null
+# Written by Bram Cohen, Uoti Urpala, and John Hoffman
+# see LICENSE.txt for license information
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+ bool = lambda x: not not x
+
+try:
+ sum([1])
+ negsum = lambda a: len(a)-sum(a)
+except:
+ negsum = lambda a: reduce(lambda x,y: x+(not y), a, 0)
+
+def _int_to_booleans(x):
+ r = []
+ for i in range(8):
+ r.append(bool(x & 0x80))
+ x <<= 1
+ return tuple(r)
+
+lookup_table = []
+reverse_lookup_table = {}
+for i in xrange(256):
+ x = _int_to_booleans(i)
+ lookup_table.append(x)
+ reverse_lookup_table[x] = chr(i)
+
+
+class Bitfield:
+ def __init__(self, length = None, bitstring = None, copyfrom = None):
+ if copyfrom is not None:
+ self.length = copyfrom.length
+ self.array = copyfrom.array[:]
+ self.numfalse = copyfrom.numfalse
+ return
+ if length is None:
+ raise ValueError, "length must be provided unless copying from another array"
+ self.length = length
+ if bitstring is not None:
+ extra = len(bitstring) * 8 - length
+ if extra < 0 or extra >= 8:
+ raise ValueError
+ t = lookup_table
+ r = []
+ for c in bitstring:
+ r.extend(t[ord(c)])
+ if extra > 0:
+ if r[-extra:] != [0] * extra:
+ raise ValueError
+ del r[-extra:]
+ self.array = r
+ self.numfalse = negsum(r)
+ else:
+ self.array = [False] * length
+ self.numfalse = length
+
+ def __setitem__(self, index, val):
+ val = bool(val)
+ self.numfalse += self.array[index]-val
+ self.array[index] = val
+
+ def __getitem__(self, index):
+ return self.array[index]
+
+ def __len__(self):
+ return self.length
+
+ def tostring(self):
+ booleans = self.array
+ t = reverse_lookup_table
+ s = len(booleans) % 8
+ r = [ t[tuple(booleans[x:x+8])] for x in xrange(0, len(booleans)-s, 8) ]
+ if s:
+ r += t[tuple(booleans[-s:] + ([0] * (8-s)))]
+ return ''.join(r)
+
+ def complete(self):
+ return not self.numfalse
+
+
+def test_bitfield():
+ try:
+ x = Bitfield(7, 'ab')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(7, 'ab')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(9, 'abc')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(0, 'a')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(1, '')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(7, '')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(8, '')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(9, 'a')
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(7, chr(1))
+ assert False
+ except ValueError:
+ pass
+ try:
+ x = Bitfield(9, chr(0) + chr(0x40))
+ assert False
+ except ValueError:
+ pass
+ assert Bitfield(0, '').tostring() == ''
+ assert Bitfield(1, chr(0x80)).tostring() == chr(0x80)
+ assert Bitfield(7, chr(0x02)).tostring() == chr(0x02)
+ assert Bitfield(8, chr(0xFF)).tostring() == chr(0xFF)
+ assert Bitfield(9, chr(0) + chr(0x80)).tostring() == chr(0) + chr(0x80)
+ x = Bitfield(1)
+ assert x.numfalse == 1
+ x[0] = 1
+ assert x.numfalse == 0
+ x[0] = 1
+ assert x.numfalse == 0
+ assert x.tostring() == chr(0x80)
+ x = Bitfield(7)
+ assert len(x) == 7
+ x[6] = 1
+ assert x.numfalse == 6
+ assert x.tostring() == chr(0x02)
+ x = Bitfield(8)
+ x[7] = 1
+ assert x.tostring() == chr(1)
+ x = Bitfield(9)
+ x[8] = 1
+ assert x.numfalse == 8
+ assert x.tostring() == chr(0) + chr(0x80)
+ x = Bitfield(8, chr(0xC4))
+ assert len(x) == 8
+ assert x.numfalse == 5
+ assert x.tostring() == chr(0xC4)
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from time import *
+import sys
+
+_MAXFORWARD = 100
+_FUDGE = 1
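+
+# RelativeTime (below) returns a value that never runs backwards and never
+# jumps more than _MAXFORWARD seconds at once: when the underlying time()
+# does either, get_time() advances by only _FUDGE seconds and folds the
+# difference into its offset.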
+
+class RelativeTime:
+ def __init__(self):
+ self.time = time()
+ self.offset = 0
+
+ def get_time(self):
+ t = time() + self.offset
+ if t < self.time or t > self.time + _MAXFORWARD:
+ self.time += _FUDGE
+ self.offset += self.time - t
+ return self.time
+ self.time = t
+ return t
+
+if sys.platform != 'win32':
+ _RTIME = RelativeTime()
+ def clock():
+ return _RTIME.get_time()
\ No newline at end of file
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from zurllib import urlopen
+from urlparse import urlparse
+from BT1.btformats import check_message
+from BT1.Choker import Choker
+from BT1.Storage import Storage
+from BT1.StorageWrapper import StorageWrapper
+from BT1.FileSelector import FileSelector
+from BT1.Uploader import Upload
+from BT1.Downloader import Downloader
+from BT1.HTTPDownloader import HTTPDownloader
+from BT1.Connecter import Connecter
+from RateLimiter import RateLimiter
+from BT1.Encrypter import Encoder
+from RawServer import RawServer, autodetect_ipv6, autodetect_socket_style
+from BT1.Rerequester import Rerequester
+from BT1.DownloaderFeedback import DownloaderFeedback
+from RateMeasure import RateMeasure
+from CurrentRateMeasure import Measure
+from BT1.PiecePicker import PiecePicker
+from BT1.Statistics import Statistics
+from ConfigDir import ConfigDir
+from bencode import bencode, bdecode
+from natpunch import UPnP_test
+from sha import sha
+from os import path, makedirs, listdir
+from parseargs import parseargs, formatDefinitions, defaultargs
+from socket import error as socketerror
+from random import seed
+from threading import Thread, Event
+from clock import clock
+from __init__ import createPeerID
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+defaults = [
+ ('max_uploads', 7,
+ "the maximum number of uploads to allow at once."),
+ ('keepalive_interval', 120.0,
+ 'number of seconds to pause between sending keepalives'),
+ ('download_slice_size', 2 ** 14,
+ "How many bytes to query for per request."),
+ ('upload_unit_size', 1460,
+ "when limiting upload rate, how many bytes to send at a time"),
+ ('request_backlog', 10,
+ "maximum number of requests to keep in a single pipe at once."),
+ ('max_message_length', 2 ** 23,
+ "maximum length prefix encoding you'll accept over the wire - larger values get the connection dropped."),
+ ('ip', '',
+ "ip to report you have to the tracker."),
+ ('minport', 10000, 'minimum port to listen on, counts up if unavailable'),
+ ('maxport', 60000, 'maximum port to listen on'),
+ ('random_port', 1, 'whether to choose randomly inside the port range ' +
+ 'instead of counting up linearly'),
+ ('responsefile', '',
+ 'file the server response was stored in, alternative to url'),
+ ('url', '',
+ 'url to get file from, alternative to responsefile'),
+ ('selector_enabled', 1,
+ 'whether to enable the file selector and fast resume function'),
+ ('expire_cache_data', 10,
+ 'the number of days after which you wish to expire old cache data ' +
+ '(0 = disabled)'),
+ ('priority', '',
+ 'a list of file priorities separated by commas, must be one per file, ' +
+ '0 = highest, 1 = normal, 2 = lowest, -1 = download disabled'),
+ ('saveas', '',
+ 'local file name to save the file as, null indicates query user'),
+ ('timeout', 300.0,
+ 'time to wait between closing sockets which nothing has been received on'),
+ ('timeout_check_interval', 60.0,
+ 'time to wait between checking if any connections have timed out'),
+ ('max_slice_length', 2 ** 17,
+ "maximum length slice to send to peers, larger requests are ignored"),
+ ('max_rate_period', 20.0,
+ "maximum amount of time to guess the current rate estimate represents"),
+ ('bind', '',
+ 'comma-separated list of ips/hostnames to bind to locally'),
+# ('ipv6_enabled', autodetect_ipv6(),
+ ('ipv6_enabled', 0,
+ 'allow the client to connect to peers via IPv6'),
+ ('ipv6_binds_v4', autodetect_socket_style(),
+ "set if an IPv6 server socket won't also field IPv4 connections"),
+ ('upnp_nat_access', 1,
+ 'attempt to autoconfigure a UPnP router to forward a server port ' +
+ '(0 = disabled, 1 = mode 1 [fast], 2 = mode 2 [slow])'),
+ ('upload_rate_fudge', 5.0,
+ 'time equivalent of writing to kernel-level TCP buffer, for rate adjustment'),
+ ('tcp_ack_fudge', 0.03,
+ 'how much TCP ACK download overhead to add to upload rate calculations ' +
+ '(0 = disabled)'),
+ ('display_interval', .5,
+ 'time between updates of displayed information'),
+ ('rerequest_interval', 5 * 60,
+ 'time to wait between requesting more peers'),
+ ('min_peers', 20,
+ 'minimum number of peers to not do rerequesting'),
+ ('http_timeout', 60,
+ 'number of seconds to wait before assuming that an http connection has timed out'),
+ ('max_initiate', 40,
+ 'number of peers at which to stop initiating new connections'),
+ ('check_hashes', 1,
+ 'whether to check hashes on disk'),
+ ('max_upload_rate', 0,
+ 'maximum kB/s to upload at (0 = no limit, -1 = automatic)'),
+ ('max_download_rate', 0,
+ 'maximum kB/s to download at (0 = no limit)'),
+ ('alloc_type', 'normal',
+ 'allocation type (may be normal, background, pre-allocate or sparse)'),
+ ('alloc_rate', 2.0,
+ 'rate (in MiB/s) to allocate space at using background allocation'),
+ ('buffer_reads', 1,
+ 'whether to buffer disk reads'),
+ ('write_buffer_size', 4,
+ 'the maximum amount of space to use for buffering disk writes ' +
+ '(in megabytes, 0 = disabled)'),
+ ('breakup_seed_bitfield', 1,
+ 'sends an incomplete bitfield and then fills with have messages, '
+ 'in order to get around stupid ISP manipulation'),
+ ('snub_time', 30.0,
+ "seconds to wait for data to come in over a connection before assuming it's semi-permanently choked"),
+ ('spew', 0,
+ "whether to display diagnostic info to stdout"),
+ ('rarest_first_cutoff', 2,
+ "number of downloads at which to switch from random to rarest first"),
+ ('rarest_first_priority_cutoff', 5,
+ 'the number of peers which need to have a piece before other partials take priority over rarest first'),
+ ('min_uploads', 4,
+ "the number of uploads to fill out to with extra optimistic unchokes"),
+ ('max_files_open', 50,
+ 'the maximum number of files to keep open at a time, 0 means no limit'),
+ ('round_robin_period', 30,
+ "the number of seconds between the client's switching upload targets"),
+ ('super_seeder', 0,
+ "whether to use special upload-efficiency-maximizing routines (only for dedicated seeds)"),
+ ('security', 1,
+ "whether to enable extra security features intended to prevent abuse"),
+ ('max_connections', 0,
+ "the absolute maximum number of peers to connect with (0 = no limit)"),
+ ('auto_kick', 1,
+ "whether to allow the client to automatically kick/ban peers that send bad data"),
+ ('double_check', 1,
+ "whether to double-check data being written to the disk for errors (may increase CPU load)"),
+ ('triple_check', 0,
+ "whether to thoroughly check data being written to the disk (may slow disk access)"),
+ ('lock_files', 1,
+ "whether to lock files the client is working with"),
+ ('lock_while_reading', 0,
+ "whether to lock access to files being read"),
+ ('auto_flush', 0,
+ "minutes between automatic flushes to disk (0 = disabled)"),
+ ('dedicated_seed_id', '',
+ "code to send to tracker identifying as a dedicated seed"),
+ ]
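+
+# Illustrative note (not part of the original source): parse_params() below
+# hands this table to parseargs(), so -- assuming parseargs' usual
+# '--name value' command-line syntax -- a call such as
+#     parse_params(['--max_upload_rate', '50', 'x.torrent'])
+# should return a config dict with max_upload_rate set to 50 and 'x.torrent'
+# recorded as the responsefile (if it is a local file) or as the url; the
+# option and file names here are examples only.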
+
+argslistheader = 'Arguments are:\n\n'
+
+
+def _failfunc(x):
+ print x
+
+# old-style downloader
+def download(params, filefunc, statusfunc, finfunc, errorfunc, doneflag, cols,
+ pathFunc = None, presets = {}, exchandler = None,
+ failed = _failfunc, paramfunc = None):
+
+ try:
+ config = parse_params(params, presets)
+ except ValueError, e:
+ failed('error: ' + str(e) + '\nrun with no args for parameter explanations')
+ return
+ if not config:
+ errorfunc(get_usage())
+ return
+
+ myid = createPeerID()
+ seed(myid)
+
+ rawserver = RawServer(doneflag, config['timeout_check_interval'],
+ config['timeout'], ipv6_enable = config['ipv6_enabled'],
+ failfunc = failed, errorfunc = exchandler)
+
+ upnp_type = UPnP_test(config['upnp_nat_access'])
+ try:
+ listen_port = rawserver.find_and_bind(config['minport'], config['maxport'],
+ config['bind'], ipv6_socket_style = config['ipv6_binds_v4'],
+ upnp = upnp_type, randomizer = config['random_port'])
+ except socketerror, e:
+ failed("Couldn't listen - " + str(e))
+ return
+
+ response = get_response(config['responsefile'], config['url'], failed)
+ if not response:
+ return
+
+ infohash = sha(bencode(response['info'])).digest()
+
+ d = BT1Download(statusfunc, finfunc, errorfunc, exchandler, doneflag,
+ config, response, infohash, myid, rawserver, listen_port)
+
+ if not d.saveAs(filefunc):
+ return
+
+ if pathFunc:
+ pathFunc(d.getFilename())
+
+ hashcheck = d.initFiles(old_style = True)
+ if not hashcheck:
+ return
+ if not hashcheck():
+ return
+ if not d.startEngine():
+ return
+ d.startRerequester()
+ d.autoStats()
+
+ statusfunc(activity = 'connecting to peers')
+
+ if paramfunc:
+ paramfunc({ 'max_upload_rate' : d.setUploadRate, # change_max_upload_rate(<int KiB/sec>)
+ 'max_uploads': d.setConns, # change_max_uploads(<int max uploads>)
+ 'listen_port' : listen_port, # int
+ 'peer_id' : myid, # string
+ 'info_hash' : infohash, # string
+ 'start_connection' : d._startConnection, # start_connection((<string ip>, <int port>), <peer id>)
+ })
+
+ rawserver.listen_forever(d.getPortHandler())
+
+ d.shutdown()
+
+
+def parse_params(params, presets = {}):
+ if len(params) == 0:
+ return None
+ config, args = parseargs(params, defaults, 0, 1, presets = presets)
+ if args:
+ if config['responsefile'] or config['url']:
+ raise ValueError,'must have responsefile or url as arg or parameter, not both'
+ if path.isfile(args[0]):
+ config['responsefile'] = args[0]
+ else:
+ try:
+ urlparse(args[0])
+ except:
+ raise ValueError, 'bad filename or url'
+ config['url'] = args[0]
+ elif (config['responsefile'] == '') == (config['url'] == ''):
+ raise ValueError, 'need responsefile or url, must have one, cannot have both'
+ return config
+
+
+def get_usage(defaults = defaults, cols = 100, presets = {}):
+ return (argslistheader + formatDefinitions(defaults, cols, presets))
+
+
+def get_response(file, url, errorfunc):
+ try:
+ if file:
+ h = open(file, 'rb')
+ try:
+ line = h.read(10) # quick test to see if responsefile contains a dict
+ front,garbage = line.split(':',1)
+ assert front[0] == 'd'
+ int(front[1:])
+ except:
+ errorfunc(file+' is not a valid responsefile')
+ return None
+ try:
+ h.seek(0)
+ except:
+ try:
+ h.close()
+ except:
+ pass
+ h = open(file, 'rb')
+ else:
+ try:
+ h = urlopen(url)
+ except:
+ errorfunc(url+' bad url')
+ return None
+ response = h.read()
+
+ except IOError, e:
+ errorfunc('problem getting response info - ' + str(e))
+ return None
+ try:
+ h.close()
+ except:
+ pass
+ try:
+ try:
+ response = bdecode(response)
+ except:
+ errorfunc("warning: bad data in responsefile")
+ response = bdecode(response, sloppy=1)
+ check_message(response)
+ except ValueError, e:
+ errorfunc("got bad file info - " + str(e))
+ return None
+
+ return response
+
+
+class BT1Download:
+ def __init__(self, statusfunc, finfunc, errorfunc, excfunc, doneflag,
+ config, response, infohash, id, rawserver, port,
+ appdataobj = None):
+ self.statusfunc = statusfunc
+ self.finfunc = finfunc
+ self.errorfunc = errorfunc
+ self.excfunc = excfunc
+ self.doneflag = doneflag
+ self.config = config
+ self.response = response
+ self.infohash = infohash
+ self.myid = id
+ self.rawserver = rawserver
+ self.port = port
+
+ self.info = self.response['info']
+ self.pieces = [self.info['pieces'][x:x+20]
+ for x in xrange(0, len(self.info['pieces']), 20)]
+ self.len_pieces = len(self.pieces)
+ self.argslistheader = argslistheader
+ self.unpauseflag = Event()
+ self.unpauseflag.set()
+ self.downloader = None
+ self.storagewrapper = None
+ self.fileselector = None
+ self.super_seeding_active = False
+ self.filedatflag = Event()
+ self.spewflag = Event()
+ self.superseedflag = Event()
+ self.whenpaused = None
+ self.finflag = Event()
+ self.rerequest = None
+ self.tcp_ack_fudge = config['tcp_ack_fudge']
+
+ self.selector_enabled = config['selector_enabled']
+ if appdataobj:
+ self.appdataobj = appdataobj
+ elif self.selector_enabled:
+ self.appdataobj = ConfigDir()
+ self.appdataobj.deleteOldCacheData( config['expire_cache_data'],
+ [self.infohash] )
+
+ self.excflag = self.rawserver.get_exception_flag()
+ self.failed = False
+ self.checking = False
+ self.started = False
+
+ self.picker = PiecePicker(self.len_pieces, config['rarest_first_cutoff'],
+ config['rarest_first_priority_cutoff'])
+ self.choker = Choker(config, rawserver.add_task,
+ self.picker, self.finflag.isSet)
+
+
+ def checkSaveLocation(self, loc):
+ if self.info.has_key('length'):
+ return path.exists(loc)
+ for x in self.info['files']:
+ if path.exists(path.join(loc, x['path'][0])):
+ return True
+ return False
+
+
+ def saveAs(self, filefunc, pathfunc = None):
+ try:
+ def make(f, forcedir = False):
+ if not forcedir:
+ f = path.split(f)[0]
+ if f != '' and not path.exists(f):
+ makedirs(f)
+
+ if self.info.has_key('length'):
+ file_length = self.info['length']
+ file = filefunc(self.info['name'], file_length,
+ self.config['saveas'], False)
+ if file is None:
+ return None
+ make(file)
+ files = [(file, file_length)]
+ else:
+ file_length = 0L
+ for x in self.info['files']:
+ file_length += x['length']
+ file = filefunc(self.info['name'], file_length,
+ self.config['saveas'], True)
+ if file is None:
+ return None
+
+ # if this path exists, and no files from the info dict exist, we assume it's a new download and
+ # the user wants to create a new directory with the default name
+ existing = 0
+ if path.exists(file):
+ if not path.isdir(file):
+                    self.errorfunc(file + ' is not a dir')
+ return None
+ if len(listdir(file)) > 0: # if it's not empty
+ for x in self.info['files']:
+ if path.exists(path.join(file, x['path'][0])):
+ existing = 1
+ if not existing:
+ file = path.join(file, self.info['name'])
+ if path.exists(file) and not path.isdir(file):
+ if file[-8:] == '.torrent':
+ file = file[:-8]
+ if path.exists(file) and not path.isdir(file):
+ self.errorfunc("Can't create dir - " + self.info['name'])
+ return None
+ make(file, True)
+
+ # alert the UI to any possible change in path
+ if pathfunc != None:
+ pathfunc(file)
+
+ files = []
+ for x in self.info['files']:
+ n = file
+ for i in x['path']:
+ n = path.join(n, i)
+ files.append((n, x['length']))
+ make(n)
+ except OSError, e:
+ self.errorfunc("Couldn't allocate dir - " + str(e))
+ return None
+
+ self.filename = file
+ self.files = files
+ self.datalength = file_length
+
+ return file
+
+
+ def getFilename(self):
+ return self.filename
+
+
+ def _finished(self):
+ self.finflag.set()
+ try:
+ self.storage.set_readonly()
+ except (IOError, OSError), e:
+ self.errorfunc('trouble setting readonly at end - ' + str(e))
+ if self.superseedflag.isSet():
+ self._set_super_seed()
+ self.choker.set_round_robin_period(
+ max( self.config['round_robin_period'],
+ self.config['round_robin_period'] *
+ self.info['piece length'] / 200000 ) )
+ self.rerequest_complete()
+ self.finfunc()
+
+ def _data_flunked(self, amount, index):
+ self.ratemeasure_datarejected(amount)
+ if not self.doneflag.isSet():
+ self.errorfunc('piece %d failed hash check, re-downloading it' % index)
+
+ def _failed(self, reason):
+ self.failed = True
+ self.doneflag.set()
+ if reason is not None:
+ self.errorfunc(reason)
+
+
+ def initFiles(self, old_style = False, statusfunc = None):
+ if self.doneflag.isSet():
+ return None
+ if not statusfunc:
+ statusfunc = self.statusfunc
+
+ disabled_files = None
+ if self.selector_enabled:
+ self.priority = self.config['priority']
+ if self.priority:
+ try:
+ self.priority = self.priority.split(',')
+ assert len(self.priority) == len(self.files)
+ self.priority = [int(p) for p in self.priority]
+ for p in self.priority:
+ assert p >= -1
+ assert p <= 2
+ except:
+ self.errorfunc('bad priority list given, ignored')
+ self.priority = None
+
+ data = self.appdataobj.getTorrentData(self.infohash)
+ try:
+ d = data['resume data']['priority']
+ assert len(d) == len(self.files)
+ disabled_files = [x == -1 for x in d]
+ except:
+ try:
+ disabled_files = [x == -1 for x in self.priority]
+ except:
+ pass
+
+ try:
+ try:
+ self.storage = Storage(self.files, self.info['piece length'],
+ self.doneflag, self.config, disabled_files)
+ except IOError, e:
+ self.errorfunc('trouble accessing files - ' + str(e))
+ return None
+ if self.doneflag.isSet():
+ return None
+
+ self.storagewrapper = StorageWrapper(self.storage, self.config['download_slice_size'],
+ self.pieces, self.info['piece length'], self._finished, self._failed,
+ statusfunc, self.doneflag, self.config['check_hashes'],
+ self._data_flunked, self.rawserver.add_task,
+ self.config, self.unpauseflag)
+
+ except ValueError, e:
+ self._failed('bad data - ' + str(e))
+ except IOError, e:
+ self._failed('IOError - ' + str(e))
+ if self.doneflag.isSet():
+ return None
+
+ if self.selector_enabled:
+ self.fileselector = FileSelector(self.files, self.info['piece length'],
+ self.appdataobj.getPieceDir(self.infohash),
+ self.storage, self.storagewrapper,
+ self.rawserver.add_task,
+ self._failed)
+ if data:
+ data = data.get('resume data')
+ if data:
+ self.fileselector.unpickle(data)
+
+ self.checking = True
+ if old_style:
+ return self.storagewrapper.old_style_init()
+ return self.storagewrapper.initialize
+
+
+ def getCachedTorrentData(self):
+ return self.appdataobj.getTorrentData(self.infohash)
+
+
+ def _make_upload(self, connection, ratelimiter, totalup):
+ return Upload(connection, ratelimiter, totalup,
+ self.choker, self.storagewrapper, self.picker,
+ self.config)
+
+ def _kick_peer(self, connection):
+ def k(connection = connection):
+ connection.close()
+ self.rawserver.add_task(k,0)
+
+ def _ban_peer(self, ip):
+ self.encoder_ban(ip)
+
+ def _received_raw_data(self, x):
+ if self.tcp_ack_fudge:
+ x = int(x*self.tcp_ack_fudge)
+ self.ratelimiter.adjust_sent(x)
+# self.upmeasure.update_rate(x)
+
+ def _received_data(self, x):
+ self.downmeasure.update_rate(x)
+ self.ratemeasure.data_came_in(x)
+
+ def _received_http_data(self, x):
+ self.downmeasure.update_rate(x)
+ self.ratemeasure.data_came_in(x)
+ self.downloader.external_data_received(x)
+
+ def _cancelfunc(self, pieces):
+ self.downloader.cancel_piece_download(pieces)
+ self.httpdownloader.cancel_piece_download(pieces)
+ def _reqmorefunc(self, pieces):
+ self.downloader.requeue_piece_download(pieces)
+
+ def startEngine(self, ratelimiter = None, statusfunc = None):
+ if self.doneflag.isSet():
+ return False
+ if not statusfunc:
+ statusfunc = self.statusfunc
+
+ self.checking = False
+
+ for i in xrange(self.len_pieces):
+ if self.storagewrapper.do_I_have(i):
+ self.picker.complete(i)
+ self.upmeasure = Measure(self.config['max_rate_period'],
+ self.config['upload_rate_fudge'])
+ self.downmeasure = Measure(self.config['max_rate_period'])
+
+ if ratelimiter:
+ self.ratelimiter = ratelimiter
+ else:
+ self.ratelimiter = RateLimiter(self.rawserver.add_task,
+ self.config['upload_unit_size'],
+ self.setConns)
+ self.ratelimiter.set_upload_rate(self.config['max_upload_rate'])
+
+ self.ratemeasure = RateMeasure()
+ self.ratemeasure_datarejected = self.ratemeasure.data_rejected
+
+ self.downloader = Downloader(self.storagewrapper, self.picker,
+ self.config['request_backlog'], self.config['max_rate_period'],
+ self.len_pieces, self.config['download_slice_size'],
+ self._received_data, self.config['snub_time'], self.config['auto_kick'],
+ self._kick_peer, self._ban_peer)
+ self.downloader.set_download_rate(self.config['max_download_rate'])
+ self.connecter = Connecter(self._make_upload, self.downloader, self.choker,
+ self.len_pieces, self.upmeasure, self.config,
+ self.ratelimiter, self.rawserver.add_task)
+ self.encoder = Encoder(self.connecter, self.rawserver,
+ self.myid, self.config['max_message_length'], self.rawserver.add_task,
+ self.config['keepalive_interval'], self.infohash,
+ self._received_raw_data, self.config)
+ self.encoder_ban = self.encoder.ban
+
+ self.httpdownloader = HTTPDownloader(self.storagewrapper, self.picker,
+ self.rawserver, self.finflag, self.errorfunc, self.downloader,
+ self.config['max_rate_period'], self.infohash, self._received_http_data,
+ self.connecter.got_piece)
+ if self.response.has_key('httpseeds') and not self.finflag.isSet():
+ for u in self.response['httpseeds']:
+ self.httpdownloader.make_download(u)
+
+ if self.selector_enabled:
+ self.fileselector.tie_in(self.picker, self._cancelfunc,
+ self._reqmorefunc, self.rerequest_ondownloadmore)
+ if self.priority:
+ self.fileselector.set_priorities_now(self.priority)
+ self.appdataobj.deleteTorrentData(self.infohash)
+ # erase old data once you've started modifying it
+
+ if self.config['super_seeder']:
+ self.set_super_seed()
+
+ self.started = True
+ return True
+
+
+ def rerequest_complete(self):
+ if self.rerequest:
+ self.rerequest.announce(1)
+
+ def rerequest_stopped(self):
+ if self.rerequest:
+ self.rerequest.announce(2)
+
+ def rerequest_lastfailed(self):
+ if self.rerequest:
+ return self.rerequest.last_failed
+ return False
+
+ def rerequest_ondownloadmore(self):
+ if self.rerequest:
+ self.rerequest.hit()
+
+ def startRerequester(self, seededfunc = None, force_rapid_update = False):
+ if self.response.has_key('announce-list'):
+ trackerlist = self.response['announce-list']
+ else:
+ trackerlist = [[self.response['announce']]]
+
+ self.rerequest = Rerequester(trackerlist, self.config['rerequest_interval'],
+ self.rawserver.add_task, self.connecter.how_many_connections,
+ self.config['min_peers'], self.encoder.start_connections,
+ self.rawserver.add_task, self.storagewrapper.get_amount_left,
+ self.upmeasure.get_total, self.downmeasure.get_total, self.port, self.config['ip'],
+ self.myid, self.infohash, self.config['http_timeout'],
+ self.errorfunc, self.excfunc, self.config['max_initiate'],
+ self.doneflag, self.upmeasure.get_rate, self.downmeasure.get_rate,
+ self.unpauseflag, self.config['dedicated_seed_id'],
+ seededfunc, force_rapid_update )
+
+ self.rerequest.start()
+
+
+ def _init_stats(self):
+ self.statistics = Statistics(self.upmeasure, self.downmeasure,
+ self.connecter, self.httpdownloader, self.ratelimiter,
+ self.rerequest_lastfailed, self.filedatflag)
+ if self.info.has_key('files'):
+ self.statistics.set_dirstats(self.files, self.info['piece length'])
+ if self.config['spew']:
+ self.spewflag.set()
+
+ def autoStats(self, displayfunc = None):
+ if not displayfunc:
+ displayfunc = self.statusfunc
+
+ self._init_stats()
+ DownloaderFeedback(self.choker, self.httpdownloader, self.rawserver.add_task,
+ self.upmeasure.get_rate, self.downmeasure.get_rate,
+ self.ratemeasure, self.storagewrapper.get_stats,
+ self.datalength, self.finflag, self.spewflag, self.statistics,
+ displayfunc, self.config['display_interval'])
+
+ def startStats(self):
+ self._init_stats()
+ d = DownloaderFeedback(self.choker, self.httpdownloader, self.rawserver.add_task,
+ self.upmeasure.get_rate, self.downmeasure.get_rate,
+ self.ratemeasure, self.storagewrapper.get_stats,
+ self.datalength, self.finflag, self.spewflag, self.statistics)
+ return d.gather
+
+
+ def getPortHandler(self):
+ return self.encoder
+
+
+ def shutdown(self, torrentdata = {}):
+ if self.checking or self.started:
+ self.storagewrapper.sync()
+ self.storage.close()
+ self.rerequest_stopped()
+ if self.fileselector and self.started:
+ if not self.failed:
+ self.fileselector.finish()
+ torrentdata['resume data'] = self.fileselector.pickle()
+ try:
+ self.appdataobj.writeTorrentData(self.infohash,torrentdata)
+ except:
+ self.appdataobj.deleteTorrentData(self.infohash) # clear it
+ return not self.failed and not self.excflag.isSet()
+ # if returns false, you may wish to auto-restart the torrent
+
+
+ def setUploadRate(self, rate):
+ try:
+ def s(self = self, rate = rate):
+ self.config['max_upload_rate'] = rate
+ self.ratelimiter.set_upload_rate(rate)
+ self.rawserver.add_task(s)
+ except AttributeError:
+ pass
+
+ def setConns(self, conns, conns2 = None):
+ if not conns2:
+ conns2 = conns
+ try:
+ def s(self = self, conns = conns, conns2 = conns2):
+ self.config['min_uploads'] = conns
+ self.config['max_uploads'] = conns2
+ if (conns > 30):
+ self.config['max_initiate'] = conns + 10
+ self.rawserver.add_task(s)
+ except AttributeError:
+ pass
+
+ def setDownloadRate(self, rate):
+ try:
+ def s(self = self, rate = rate):
+ self.config['max_download_rate'] = rate
+ self.downloader.set_download_rate(rate)
+ self.rawserver.add_task(s)
+ except AttributeError:
+ pass
+
+ def startConnection(self, ip, port, id):
+ self.encoder._start_connection((ip, port), id)
+
+ def _startConnection(self, ipandport, id):
+ self.encoder._start_connection(ipandport, id)
+
+ def setInitiate(self, initiate):
+ try:
+ def s(self = self, initiate = initiate):
+ self.config['max_initiate'] = initiate
+ self.rawserver.add_task(s)
+ except AttributeError:
+ pass
+
+ def getConfig(self):
+ return self.config
+
+ def getDefaults(self):
+ return defaultargs(defaults)
+
+ def getUsageText(self):
+ return self.argslistheader
+
+ def reannounce(self, special = None):
+ try:
+ def r(self = self, special = special):
+ if special is None:
+ self.rerequest.announce()
+ else:
+ self.rerequest.announce(specialurl = special)
+ self.rawserver.add_task(r)
+ except AttributeError:
+ pass
+
+ def getResponse(self):
+ try:
+ return self.response
+ except:
+ return None
+
+# def Pause(self):
+# try:
+# if self.storagewrapper:
+# self.rawserver.add_task(self._pausemaker, 0)
+# except:
+# return False
+# self.unpauseflag.clear()
+# return True
+#
+# def _pausemaker(self):
+# self.whenpaused = clock()
+# self.unpauseflag.wait() # sticks a monkey wrench in the main thread
+#
+# def Unpause(self):
+# self.unpauseflag.set()
+# if self.whenpaused and clock()-self.whenpaused > 60:
+# def r(self = self):
+# self.rerequest.announce(3) # rerequest automatically if paused for >60 seconds
+# self.rawserver.add_task(r)
+
+ def Pause(self):
+ if not self.storagewrapper:
+ return False
+ self.unpauseflag.clear()
+ self.rawserver.add_task(self.onPause)
+ return True
+
+ def onPause(self):
+ self.whenpaused = clock()
+ if not self.downloader:
+ return
+ self.downloader.pause(True)
+ self.encoder.pause(True)
+ self.choker.pause(True)
+
+ def Unpause(self):
+ self.unpauseflag.set()
+ self.rawserver.add_task(self.onUnpause)
+
+ def onUnpause(self):
+ if not self.downloader:
+ return
+ self.downloader.pause(False)
+ self.encoder.pause(False)
+ self.choker.pause(False)
+ if self.rerequest and self.whenpaused and clock()-self.whenpaused > 60:
+ self.rerequest.announce(3) # rerequest automatically if paused for >60 seconds
+
+ def set_super_seed(self):
+ try:
+ self.superseedflag.set()
+ def s(self = self):
+ if self.finflag.isSet():
+ self._set_super_seed()
+ self.rawserver.add_task(s)
+ except AttributeError:
+ pass
+
+ def _set_super_seed(self):
+ if not self.super_seeding_active:
+ self.super_seeding_active = True
+ self.errorfunc(' ** SUPER-SEED OPERATION ACTIVE **\n' +
+ ' please set Max uploads so each peer gets 6-8 kB/s')
+ def s(self = self):
+ self.downloader.set_super_seed()
+ self.choker.set_super_seed()
+ self.rawserver.add_task(s)
+ if self.finflag.isSet(): # mode started when already finished
+ def r(self = self):
+ self.rerequest.announce(3) # so after kicking everyone off, reannounce
+ self.rawserver.add_task(r)
+
+ def am_I_finished(self):
+ return self.finflag.isSet()
+
+ def get_transfer_stats(self):
+ return self.upmeasure.get_total(), self.downmeasure.get_total()
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+'''
+reads/writes a Windows-style INI file
+format:
+
+ aa = "bb"
+ cc = 11
+
+ [eee]
+ ff = "gg"
+
+decodes to:
+d = { '': {'aa':'bb','cc':'11'}, 'eee': {'ff':'gg'} }
+
+the encoder can also take this as input:
+
+d = { 'aa': 'bb', 'cc': 11, 'eee': {'ff':'gg'} }
+
+though it will only decode in the above format. Keywords must be strings.
+Values that are strings are written surrounded by quotes, and the decoding
+routine automatically strips any surrounding quotes.
+Booleans are written as integers. Anything else aside from string/int/float
+may have unpredictable results. (A short usage sketch appears at the end of
+this file.)
+'''
+
+from cStringIO import StringIO
+from traceback import print_exc
+from types import DictType, StringType
+try:
+ from types import BooleanType
+except ImportError:
+ BooleanType = None
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+DEBUG = False
+
+def ini_write(f, d, comment=''):
+ try:
+ a = {'':{}}
+ for k,v in d.items():
+ assert type(k) == StringType
+ k = k.lower()
+ if type(v) == DictType:
+ if DEBUG:
+ print 'new section:' +k
+ if k:
+ assert not a.has_key(k)
+ a[k] = {}
+ aa = a[k]
+                for kk,vv in v.items():
+ assert type(kk) == StringType
+ kk = kk.lower()
+ assert not aa.has_key(kk)
+ if type(vv) == BooleanType:
+ vv = int(vv)
+ if type(vv) == StringType:
+ vv = '"'+vv+'"'
+ aa[kk] = str(vv)
+ if DEBUG:
+ print 'a['+k+']['+kk+'] = '+str(vv)
+ else:
+ aa = a['']
+ assert not aa.has_key(k)
+ if type(v) == BooleanType:
+ v = int(v)
+ if type(v) == StringType:
+ v = '"'+v+'"'
+ aa[k] = str(v)
+ if DEBUG:
+ print 'a[\'\']['+k+'] = '+str(v)
+ r = open(f,'w')
+ if comment:
+ for c in comment.split('\n'):
+ r.write('# '+c+'\n')
+ r.write('\n')
+ l = a.keys()
+ l.sort()
+ for k in l:
+ if k:
+ r.write('\n['+k+']\n')
+ aa = a[k]
+ ll = aa.keys()
+ ll.sort()
+ for kk in ll:
+ r.write(kk+' = '+aa[kk]+'\n')
+ success = True
+ except:
+ if DEBUG:
+ print_exc()
+ success = False
+ try:
+ r.close()
+ except:
+ pass
+ return success
+
+
+if DEBUG:
+ def errfunc(lineno, line, err):
+ print '('+str(lineno)+') '+err+': '+line
+else:
+ errfunc = lambda lineno, line, err: None
+
+def ini_read(f, errfunc = errfunc):
+ try:
+ r = open(f,'r')
+ ll = r.readlines()
+ d = {}
+ dd = {'':d}
+ for i in xrange(len(ll)):
+ l = ll[i]
+ l = l.strip()
+ if not l:
+ continue
+ if l[0] == '#':
+ continue
+ if l[0] == '[':
+ if l[-1] != ']':
+ errfunc(i,l,'syntax error')
+ continue
+ l1 = l[1:-1].strip().lower()
+ if not l1:
+ errfunc(i,l,'syntax error')
+ continue
+ if dd.has_key(l1):
+ errfunc(i,l,'duplicate section')
+ d = dd[l1]
+ continue
+ d = {}
+ dd[l1] = d
+ continue
+ try:
+ k,v = l.split('=',1)
+ except:
+ try:
+ k,v = l.split(':',1)
+ except:
+ errfunc(i,l,'syntax error')
+ continue
+ k = k.strip().lower()
+ v = v.strip()
+ if len(v) > 1 and ( (v[0] == '"' and v[-1] == '"') or
+ (v[0] == "'" and v[-1] == "'") ):
+ v = v[1:-1]
+ if not k:
+ errfunc(i,l,'syntax error')
+ continue
+ if d.has_key(k):
+ errfunc(i,l,'duplicate entry')
+ continue
+ d[k] = v
+ if DEBUG:
+ print dd
+ except:
+ if DEBUG:
+ print_exc()
+ dd = None
+ try:
+ r.close()
+ except:
+ pass
+ return dd
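+
+# Minimal round-trip sketch (not part of the original module); the file name
+# 'example.ini' is only an illustration of the format described at the top.
+if __name__ == '__main__':
+    if ini_write('example.ini', {'aa': 'bb', 'cc': 11, 'eee': {'ff': 'gg'}},
+                 'written by the ini_write sketch'):
+        print ini_read('example.ini')
+        # -> {'': {'aa': 'bb', 'cc': '11'}, 'eee': {'ff': 'gg'}} (key order may vary)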
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from bisect import bisect, insort
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+ bool = lambda x: not not x
+
+
+def to_long_ipv4(ip):
+ ip = ip.split('.')
+ if len(ip) != 4:
+ raise ValueError, "bad address"
+ b = 0L
+ for n in ip:
+ b *= 256
+ b += int(n)
+ return b
+
+
+def to_long_ipv6(ip):
+ if ip == '':
+ raise ValueError, "bad address"
+ if ip == '::': # boundary handling
+ ip = ''
+ elif ip[:2] == '::':
+ ip = ip[1:]
+ elif ip[0] == ':':
+ raise ValueError, "bad address"
+ elif ip[-2:] == '::':
+ ip = ip[:-1]
+ elif ip[-1] == ':':
+ raise ValueError, "bad address"
+
+ b = []
+ doublecolon = False
+ for n in ip.split(':'):
+ if n == '': # double-colon
+ if doublecolon:
+ raise ValueError, "bad address"
+ doublecolon = True
+ b.append(None)
+ continue
+ if n.find('.') >= 0: # IPv4
+ n = n.split('.')
+ if len(n) != 4:
+ raise ValueError, "bad address"
+ for i in n:
+ b.append(int(i))
+ continue
+ n = ('0'*(4-len(n))) + n
+ b.append(int(n[:2],16))
+ b.append(int(n[2:],16))
+ bb = 0L
+ for n in b:
+ if n is None:
+ for i in xrange(17-len(b)):
+ bb *= 256
+ continue
+ bb *= 256
+ bb += n
+ return bb
+
+ipv4addrmask = 65535L*256*256*256*256
+
+class IP_List:
+ def __init__(self):
+ self.ipv4list = [] # starts of ranges
+ self.ipv4dict = {} # start: end of ranges
+ self.ipv6list = [] # "
+ self.ipv6dict = {} # "
+
+ def __nonzero__(self):
+ return bool(self.ipv4list or self.ipv6list)
+
+
+ def append(self, ip_beg, ip_end = None):
+ if ip_end is None:
+ ip_end = ip_beg
+ else:
+ assert ip_beg <= ip_end
+ if ip_beg.find(':') < 0: # IPv4
+ ip_beg = to_long_ipv4(ip_beg)
+ ip_end = to_long_ipv4(ip_end)
+ l = self.ipv4list
+ d = self.ipv4dict
+ else:
+ ip_beg = to_long_ipv6(ip_beg)
+ ip_end = to_long_ipv6(ip_end)
+ bb = ip_beg % (256*256*256*256)
+ if bb == ipv4addrmask:
+ ip_beg -= bb
+ ip_end -= bb
+ l = self.ipv4list
+ d = self.ipv4dict
+ else:
+ l = self.ipv6list
+ d = self.ipv6dict
+
+ pos = bisect(l,ip_beg)-1
+ done = pos < 0
+ while not done:
+ p = pos
+ while p < len(l):
+ range_beg = l[p]
+ if range_beg > ip_end+1:
+ done = True
+ break
+ range_end = d[range_beg]
+ if range_end < ip_beg-1:
+ p += 1
+ if p == len(l):
+ done = True
+ break
+ continue
+ # if neither of the above conditions is true, the ranges overlap
+ ip_beg = min(ip_beg, range_beg)
+ ip_end = max(ip_end, range_end)
+ del l[p]
+ del d[range_beg]
+ break
+
+ insort(l,ip_beg)
+ d[ip_beg] = ip_end
+
+
+ def includes(self, ip):
+ if not (self.ipv4list or self.ipv6list):
+ return False
+ if ip.find(':') < 0: # IPv4
+ ip = to_long_ipv4(ip)
+ l = self.ipv4list
+ d = self.ipv4dict
+ else:
+ ip = to_long_ipv6(ip)
+ bb = ip % (256*256*256*256)
+ if bb == ipv4addrmask:
+ ip -= bb
+ l = self.ipv4list
+ d = self.ipv4dict
+ else:
+ l = self.ipv6list
+ d = self.ipv6dict
+ for ip_beg in l[bisect(l,ip)-1:]:
+ if ip == ip_beg:
+ return True
+ ip_end = d[ip_beg]
+ if ip > ip_beg and ip <= ip_end:
+ return True
+ return False
+
+
+ # reads a list from a file in the format 'whatever:whatever:ip-ip'
+ # (not IPv6 compatible at all)
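+    # example line (values are only an illustration):
+    #     SomeList:some range:10.0.0.0-10.0.255.255
+    # a line carrying a single address, e.g. "x:y:192.0.2.1", adds just that IP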
+ def read_rangelist(self, file):
+ f = open(file, 'r')
+ while True:
+ line = f.readline()
+ if not line:
+ break
+ line = line.strip()
+ if not line or line[0] == '#':
+ continue
+ line = line.split(':')[-1]
+ try:
+ ip1,ip2 = line.split('-')
+ except:
+ ip1 = line
+ ip2 = line
+ try:
+ self.append(ip1.strip(),ip2.strip())
+ except:
+ print '*** WARNING *** could not parse IP range: '+line
+ f.close()
+
+def is_ipv4(ip):
+ return ip.find(':') < 0
+
+def is_valid_ip(ip):
+ try:
+ if is_ipv4(ip):
+ a = ip.split('.')
+ assert len(a) == 4
+ for i in a:
+ chr(int(i))
+ return True
+ to_long_ipv6(ip)
+ return True
+ except:
+ return False
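+
+# Minimal self-check sketch (not part of the original module); the addresses
+# below are only an illustration.
+if __name__ == '__main__':
+    iplist = IP_List()
+    iplist.append('10.0.0.0', '10.255.255.255')
+    print iplist.includes('10.1.2.3')     # True  -- inside the stored range
+    print iplist.includes('192.0.2.1')    # False -- outside every range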
--- /dev/null
+#!/usr/bin/env python
+
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from BitTornado import PSYCO
+if PSYCO.psyco:
+ try:
+ import psyco
+ assert psyco.__version__ >= 0x010100f0
+ psyco.full()
+ except:
+ pass
+
+from download_bt1 import BT1Download
+from RawServer import RawServer, UPnP_ERROR
+from RateLimiter import RateLimiter
+from ServerPortHandler import MultiHandler
+from parsedir import parsedir
+from natpunch import UPnP_test
+from random import seed
+from socket import error as socketerror
+from threading import Event
+from sys import argv, exit
+import sys, os
+from clock import clock
+from __init__ import createPeerID, mapbase64, version
+from cStringIO import StringIO
+from traceback import print_exc
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+
+def fmttime(n):
+ try:
+ n = int(n) # n may be None or too large
+ assert n < 5184000 # 60 days
+ except:
+ return 'downloading'
+ m, s = divmod(n, 60)
+ h, m = divmod(m, 60)
+ return '%d:%02d:%02d' % (h, m, s)
+
+class SingleDownload:
+ def __init__(self, controller, hash, response, config, myid):
+ self.controller = controller
+ self.hash = hash
+ self.response = response
+ self.config = config
+
+ self.doneflag = Event()
+ self.waiting = True
+ self.checking = False
+ self.working = False
+ self.seed = False
+ self.closed = False
+
+ self.status_msg = ''
+ self.status_err = ['']
+ self.status_errtime = 0
+ self.status_done = 0.0
+
+ self.rawserver = controller.handler.newRawServer(hash, self.doneflag)
+
+ d = BT1Download(self.display, self.finished, self.error,
+ controller.exchandler, self.doneflag, config, response,
+ hash, myid, self.rawserver, controller.listen_port)
+ self.d = d
+
+ def start(self):
+ if not self.d.saveAs(self.saveAs):
+ self._shutdown()
+ return
+ self._hashcheckfunc = self.d.initFiles()
+ if not self._hashcheckfunc:
+ self._shutdown()
+ return
+ self.controller.hashchecksched(self.hash)
+
+
+ def saveAs(self, name, length, saveas, isdir):
+ return self.controller.saveAs(self.hash, name, saveas, isdir)
+
+ def hashcheck_start(self, donefunc):
+ if self.is_dead():
+ self._shutdown()
+ return
+ self.waiting = False
+ self.checking = True
+ self._hashcheckfunc(donefunc)
+
+ def hashcheck_callback(self):
+ self.checking = False
+ if self.is_dead():
+ self._shutdown()
+ return
+ if not self.d.startEngine(ratelimiter = self.controller.ratelimiter):
+ self._shutdown()
+ return
+ self.d.startRerequester()
+ self.statsfunc = self.d.startStats()
+ self.rawserver.start_listening(self.d.getPortHandler())
+ self.working = True
+
+ def is_dead(self):
+ return self.doneflag.isSet()
+
+ def _shutdown(self):
+ self.shutdown(False)
+
+ def shutdown(self, quiet=True):
+ if self.closed:
+ return
+ self.doneflag.set()
+ self.rawserver.shutdown()
+ if self.checking or self.working:
+ self.d.shutdown()
+ self.waiting = False
+ self.checking = False
+ self.working = False
+ self.closed = True
+ self.controller.was_stopped(self.hash)
+ if not quiet:
+ self.controller.died(self.hash)
+
+
+ def display(self, activity = None, fractionDone = None):
+ # really only used by StorageWrapper now
+ if activity:
+ self.status_msg = activity
+ if fractionDone is not None:
+ self.status_done = float(fractionDone)
+
+ def finished(self):
+ self.seed = True
+
+ def error(self, msg):
+ if self.doneflag.isSet():
+ self._shutdown()
+ self.status_err.append(msg)
+ self.status_errtime = clock()
+
+
+class LaunchMany:
+ def __init__(self, config, Output):
+ try:
+ self.config = config
+ self.Output = Output
+
+ self.torrent_dir = config['torrent_dir']
+ self.torrent_cache = {}
+ self.file_cache = {}
+ self.blocked_files = {}
+ self.scan_period = config['parse_dir_interval']
+ self.stats_period = config['display_interval']
+
+ self.torrent_list = []
+ self.downloads = {}
+ self.counter = 0
+ self.doneflag = Event()
+
+ self.hashcheck_queue = []
+ self.hashcheck_current = None
+
+ self.rawserver = RawServer(self.doneflag, config['timeout_check_interval'],
+ config['timeout'], ipv6_enable = config['ipv6_enabled'],
+ failfunc = self.failed, errorfunc = self.exchandler)
+ upnp_type = UPnP_test(config['upnp_nat_access'])
+ while True:
+ try:
+ self.listen_port = self.rawserver.find_and_bind(
+ config['minport'], config['maxport'], config['bind'],
+ ipv6_socket_style = config['ipv6_binds_v4'],
+ upnp = upnp_type, randomizer = config['random_port'])
+ break
+ except socketerror, e:
+ if upnp_type and e == UPnP_ERROR:
+ self.Output.message('WARNING: COULD NOT FORWARD VIA UPnP')
+ upnp_type = 0
+ continue
+ self.failed("Couldn't listen - " + str(e))
+ return
+
+ self.ratelimiter = RateLimiter(self.rawserver.add_task,
+ config['upload_unit_size'])
+ self.ratelimiter.set_upload_rate(config['max_upload_rate'])
+
+ self.handler = MultiHandler(self.rawserver, self.doneflag)
+ seed(createPeerID())
+ self.rawserver.add_task(self.scan, 0)
+ self.rawserver.add_task(self.stats, 0)
+
+ self.handler.listen_forever()
+
+ self.Output.message('shutting down')
+ self.hashcheck_queue = []
+ for hash in self.torrent_list:
+ self.Output.message('dropped "'+self.torrent_cache[hash]['path']+'"')
+ self.downloads[hash].shutdown()
+ self.rawserver.shutdown()
+
+ except:
+ data = StringIO()
+ print_exc(file = data)
+ Output.exception(data.getvalue())
+
+
+ def scan(self):
+ self.rawserver.add_task(self.scan, self.scan_period)
+
+ r = parsedir(self.torrent_dir, self.torrent_cache,
+ self.file_cache, self.blocked_files,
+ return_metainfo = True, errfunc = self.Output.message)
+
+ ( self.torrent_cache, self.file_cache, self.blocked_files,
+ added, removed ) = r
+
+ for hash, data in removed.items():
+ self.Output.message('dropped "'+data['path']+'"')
+ self.remove(hash)
+ for hash, data in added.items():
+ self.Output.message('added "'+data['path']+'"')
+ self.add(hash, data)
+
+ def stats(self):
+ self.rawserver.add_task(self.stats, self.stats_period)
+ data = []
+ for hash in self.torrent_list:
+ cache = self.torrent_cache[hash]
+ if self.config['display_path']:
+ name = cache['path']
+ else:
+ name = cache['name']
+ size = cache['length']
+ d = self.downloads[hash]
+ progress = '0.0%'
+ peers = 0
+ seeds = 0
+ seedsmsg = "S"
+ dist = 0.0
+ uprate = 0.0
+ dnrate = 0.0
+ upamt = 0
+ dnamt = 0
+ t = 0
+ if d.is_dead():
+ status = 'stopped'
+ elif d.waiting:
+ status = 'waiting for hash check'
+ elif d.checking:
+ status = d.status_msg
+ progress = '%.1f%%' % (d.status_done*100)
+ else:
+ stats = d.statsfunc()
+ s = stats['stats']
+ if d.seed:
+ status = 'seeding'
+ progress = '100.0%'
+ seeds = s.numOldSeeds
+ seedsmsg = "s"
+ dist = s.numCopies
+ else:
+ if s.numSeeds + s.numPeers:
+ t = stats['time']
+ if t == 0: # unlikely
+ t = 0.01
+ status = fmttime(t)
+ else:
+ t = -1
+ status = 'connecting to peers'
+ progress = '%.1f%%' % (int(stats['frac']*1000)/10.0)
+ seeds = s.numSeeds
+ dist = s.numCopies2
+ dnrate = stats['down']
+ peers = s.numPeers
+ uprate = stats['up']
+ upamt = s.upTotal
+ dnamt = s.downTotal
+
+ if d.is_dead() or d.status_errtime+300 > clock():
+ msg = d.status_err[-1]
+ else:
+ msg = ''
+
+ data.append(( name, status, progress, peers, seeds, seedsmsg, dist,
+ uprate, dnrate, upamt, dnamt, size, t, msg ))
+ stop = self.Output.display(data)
+ if stop:
+ self.doneflag.set()
+
+ def remove(self, hash):
+ self.torrent_list.remove(hash)
+ self.downloads[hash].shutdown()
+ del self.downloads[hash]
+
+ def add(self, hash, data):
+ c = self.counter
+ self.counter += 1
+ x = ''
+ for i in xrange(3):
+ x = mapbase64[c & 0x3F]+x
+ c >>= 6
+ peer_id = createPeerID(x)
+ d = SingleDownload(self, hash, data['metainfo'], self.config, peer_id)
+ self.torrent_list.append(hash)
+ self.downloads[hash] = d
+ d.start()
+
+
+ def saveAs(self, hash, name, saveas, isdir):
+ x = self.torrent_cache[hash]
+ style = self.config['saveas_style']
+ if style == 1 or style == 3:
+ if saveas:
+ saveas = os.path.join(saveas,x['file'][:-1-len(x['type'])])
+ else:
+ saveas = x['path'][:-1-len(x['type'])]
+ if style == 3:
+ if not os.path.isdir(saveas):
+ try:
+ os.mkdir(saveas)
+ except:
+ raise OSError("couldn't create directory for "+x['path']
+ +" ("+saveas+")")
+ if not isdir:
+ saveas = os.path.join(saveas, name)
+ else:
+ if saveas:
+ saveas = os.path.join(saveas, name)
+ else:
+ saveas = os.path.join(os.path.split(x['path'])[0], name)
+
+ if isdir and not os.path.isdir(saveas):
+ try:
+ os.mkdir(saveas)
+ except:
+ raise OSError("couldn't create directory for "+x['path']
+ +" ("+saveas+")")
+ return saveas
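+
+    # Summary of the saveas_style handling above (descriptive comment only):
+    # styles 1 and 3 derive the target name from the .torrent file name with
+    # its extension stripped; style 3 additionally creates a directory of that
+    # name and, for single-file torrents, places the content inside it under
+    # its internal name; any other style uses the name stored in the torrent,
+    # either under the given save directory or next to the .torrent file.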
+
+
+ def hashchecksched(self, hash = None):
+ if hash:
+ self.hashcheck_queue.append(hash)
+ if not self.hashcheck_current:
+ self._hashcheck_start()
+
+ def _hashcheck_start(self):
+ self.hashcheck_current = self.hashcheck_queue.pop(0)
+ self.downloads[self.hashcheck_current].hashcheck_start(self.hashcheck_callback)
+
+ def hashcheck_callback(self):
+ self.downloads[self.hashcheck_current].hashcheck_callback()
+ if self.hashcheck_queue:
+ self._hashcheck_start()
+ else:
+ self.hashcheck_current = None
+
+ def died(self, hash):
+ if self.torrent_cache.has_key(hash):
+ self.Output.message('DIED: "'+self.torrent_cache[hash]['path']+'"')
+
+ def was_stopped(self, hash):
+ try:
+ self.hashcheck_queue.remove(hash)
+ except:
+ pass
+ if self.hashcheck_current == hash:
+ self.hashcheck_current = None
+ if self.hashcheck_queue:
+ self._hashcheck_start()
+
+ def failed(self, s):
+ self.Output.message('FAILURE: '+s)
+
+ def exchandler(self, s):
+ self.Output.exception(s)
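+
+# Rough summary of the flow above (descriptive note): LaunchMany rescans
+# torrent_dir every parse_dir_interval seconds via scan(), hands one status
+# tuple per torrent to Output.display() every display_interval seconds via
+# stats(), and serializes hash checks through hashcheck_queue so that only one
+# torrent is checked at a time.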
--- /dev/null
+# Written by John Hoffman
+# derived from NATPortMapping.py by Yejun Yang
+# and from example code by Myers Carpenter
+# see LICENSE.txt for license information
+
+import socket
+from traceback import print_exc
+from subnetparse import IP_List
+from clock import clock
+from __init__ import createPeerID
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+DEBUG = False
+
+EXPIRE_CACHE = 30 # seconds
+ID = "BT-"+createPeerID()[-4:]
+
+try:
+ import pythoncom, win32com.client
+ _supported = 1
+except ImportError:
+ _supported = 0
+
+
+
+class _UPnP1: # derived from Myers Carpenter's code
+ # seems to use the machine's local UPnP
+ # system for its operation. Runs fairly fast
+
+ def __init__(self):
+ self.map = None
+ self.last_got_map = -10e10
+
+ def _get_map(self):
+ if self.last_got_map + EXPIRE_CACHE < clock():
+ try:
+ dispatcher = win32com.client.Dispatch("HNetCfg.NATUPnP")
+ self.map = dispatcher.StaticPortMappingCollection
+ self.last_got_map = clock()
+ except:
+ self.map = None
+ return self.map
+
+ def test(self):
+ try:
+ assert self._get_map() # make sure a map was found
+ success = True
+ except:
+ success = False
+ return success
+
+
+ def open(self, ip, p):
+ map = self._get_map()
+ try:
+ map.Add(p,'TCP',p,ip,True,ID)
+ if DEBUG:
+ print 'port opened: '+ip+':'+str(p)
+ success = True
+ except:
+ if DEBUG:
+ print "COULDN'T OPEN "+str(p)
+ print_exc()
+ success = False
+ return success
+
+
+ def close(self, p):
+ map = self._get_map()
+ try:
+ map.Remove(p,'TCP')
+ success = True
+ if DEBUG:
+ print 'port closed: '+str(p)
+ except:
+ if DEBUG:
+ print 'ERROR CLOSING '+str(p)
+ print_exc()
+ success = False
+ return success
+
+
+ def clean(self, retry = False):
+ if not _supported:
+ return
+ try:
+ map = self._get_map()
+ ports_in_use = []
+ for i in xrange(len(map)):
+ try:
+ mapping = map[i]
+ port = mapping.ExternalPort
+ prot = str(mapping.Protocol).lower()
+ desc = str(mapping.Description).lower()
+ except:
+ port = None
+ if port and prot == 'tcp' and desc[:3] == 'bt-':
+ ports_in_use.append(port)
+ success = True
+ for port in ports_in_use:
+ try:
+ map.Remove(port,'TCP')
+ except:
+ success = False
+ if not success and not retry:
+ self.clean(retry = True)
+ except:
+ pass
+
+
+class _UPnP2: # derived from Yejun Yang's code
+ # apparently does a direct search for UPnP hardware
+ # may work in some cases where _UPnP1 won't, but is slow
+ # still need to implement "clean" method
+
+ def __init__(self):
+ self.services = None
+ self.last_got_services = -10e10
+
+ def _get_services(self):
+ if not self.services or self.last_got_services + EXPIRE_CACHE < clock():
+ self.services = []
+ try:
+ f=win32com.client.Dispatch("UPnP.UPnPDeviceFinder")
+ for t in ( "urn:schemas-upnp-org:service:WANIPConnection:1",
+ "urn:schemas-upnp-org:service:WANPPPConnection:1" ):
+ try:
+ conns = f.FindByType(t,0)
+ for c in xrange(len(conns)):
+ try:
+ svcs = conns[c].Services
+ for s in xrange(len(svcs)):
+ try:
+ self.services.append(svcs[s])
+ except:
+ pass
+ except:
+ pass
+ except:
+ pass
+ except:
+ pass
+ self.last_got_services = clock()
+ return self.services
+
+ def test(self):
+ try:
+ assert self._get_services() # make sure some services can be found
+ success = True
+ except:
+ success = False
+ return success
+
+
+ def open(self, ip, p):
+ svcs = self._get_services()
+ success = False
+ for s in svcs:
+ try:
+ s.InvokeAction('AddPortMapping',['',p,'TCP',p,ip,True,ID,0],'')
+ success = True
+ except:
+ pass
+ if DEBUG and not success:
+ print "COULDN'T OPEN "+str(p)
+ print_exc()
+ return success
+
+
+ def close(self, p):
+ svcs = self._get_services()
+ success = False
+ for s in svcs:
+ try:
+ s.InvokeAction('DeletePortMapping', ['',p,'TCP'], '')
+ success = True
+ except:
+ pass
+ if DEBUG and not success:
+            print "COULDN'T CLOSE "+str(p)
+ print_exc()
+ return success
+
+
+class _UPnP: # master holding class
+ def __init__(self):
+ self.upnp1 = _UPnP1()
+ self.upnp2 = _UPnP2()
+ self.upnplist = (None, self.upnp1, self.upnp2)
+ self.upnp = None
+ self.local_ip = None
+ self.last_got_ip = -10e10
+
+ def get_ip(self):
+ if self.last_got_ip + EXPIRE_CACHE < clock():
+ local_ips = IP_List()
+ local_ips.set_intranet_addresses()
+ try:
+ for info in socket.getaddrinfo(socket.gethostname(),0,socket.AF_INET):
+ # exception if socket library isn't recent
+ self.local_ip = info[4][0]
+ if local_ips.includes(self.local_ip):
+ self.last_got_ip = clock()
+ if DEBUG:
+ print 'Local IP found: '+self.local_ip
+ break
+ else:
+ raise ValueError('couldn\'t find intranet IP')
+ except:
+ self.local_ip = None
+ if DEBUG:
+ print 'Error finding local IP'
+ print_exc()
+ return self.local_ip
+
+ def test(self, upnp_type):
+ if DEBUG:
+ print 'testing UPnP type '+str(upnp_type)
+ if not upnp_type or not _supported or self.get_ip() is None:
+ if DEBUG:
+ print 'not supported'
+ return 0
+ pythoncom.CoInitialize() # leave initialized
+ self.upnp = self.upnplist[upnp_type] # cache this
+ if self.upnp.test():
+ if DEBUG:
+ print 'ok'
+ return upnp_type
+ if DEBUG:
+ print 'tested bad'
+ return 0
+
+ def open(self, p):
+ assert self.upnp, "must run UPnP_test() with the desired UPnP access type first"
+ return self.upnp.open(self.get_ip(), p)
+
+ def close(self, p):
+ assert self.upnp, "must run UPnP_test() with the desired UPnP access type first"
+ return self.upnp.close(p)
+
+ def clean(self):
+ return self.upnp1.clean()
+
+_upnp_ = _UPnP()
+
+UPnP_test = _upnp_.test
+UPnP_open_port = _upnp_.open
+UPnP_close_port = _upnp_.close
+UPnP_reset = _upnp_.clean
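+
+# Minimal usage sketch (illustrative only; the port number is made up):
+#
+#   upnp_type = UPnP_test(1)        # or 2 for the slower device search
+#   if upnp_type:
+#       UPnP_open_port(6881)        # forward TCP 6881 to this machine
+#       ...
+#       UPnP_close_port(6881)
+#       UPnP_reset()                # drop stale 'BT-' mappings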
+
--- /dev/null
+# Written by Bill Bumgarner and Bram Cohen
+# see LICENSE.txt for license information
+
+from types import *
+from cStringIO import StringIO
+
+
+def splitLine(line, COLS=80, indent=10):
+ indent = " " * indent
+ width = COLS - (len(indent) + 1)
+ if indent and width < 15:
+ width = COLS - 2
+ indent = " "
+ s = StringIO()
+ i = 0
+ for word in line.split():
+ if i == 0:
+ s.write(indent+word)
+ i = len(word)
+ continue
+ if i + len(word) >= width:
+ s.write('\n'+indent+word)
+ i = len(word)
+ continue
+ s.write(' '+word)
+ i += len(word) + 1
+ return s.getvalue()
+
+def formatDefinitions(options, COLS, presets = {}):
+ s = StringIO()
+ for (longname, default, doc) in options:
+ s.write('--' + longname + ' <arg>\n')
+ default = presets.get(longname, default)
+ if type(default) in (IntType, LongType):
+ try:
+ default = int(default)
+ except:
+ pass
+ if default is not None:
+ doc += ' (defaults to ' + repr(default) + ')'
+ s.write(splitLine(doc,COLS,10))
+ s.write('\n\n')
+ return s.getvalue()
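+
+# The output of formatDefinitions looks roughly like this (sketch, not a
+# verbatim option of this package):
+#
+#   --max_upload_rate <arg>
+#             maximum kB/s to upload at (defaults to 20)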
+
+
+def usage(str):
+ raise ValueError(str)
+
+
+def defaultargs(options):
+ l = {}
+ for (longname, default, doc) in options:
+ if default is not None:
+ l[longname] = default
+ return l
+
+
+def parseargs(argv, options, minargs = None, maxargs = None, presets = {}):
+ config = {}
+ longkeyed = {}
+ for option in options:
+ longname, default, doc = option
+ longkeyed[longname] = option
+ config[longname] = default
+ for longname in presets.keys(): # presets after defaults but before arguments
+ config[longname] = presets[longname]
+ options = []
+ args = []
+ pos = 0
+ while pos < len(argv):
+ if argv[pos][:2] != '--':
+ args.append(argv[pos])
+ pos += 1
+ else:
+ if pos == len(argv) - 1:
+ usage('parameter passed in at end with no value')
+ key, value = argv[pos][2:], argv[pos+1]
+ pos += 2
+ if not longkeyed.has_key(key):
+ usage('unknown key --' + key)
+ longname, default, doc = longkeyed[key]
+ try:
+ t = type(config[longname])
+ if t is NoneType or t is StringType:
+ config[longname] = value
+ elif t in (IntType, LongType):
+ config[longname] = long(value)
+ elif t is FloatType:
+ config[longname] = float(value)
+ else:
+ assert 0
+ except ValueError, e:
+ usage('wrong format of --%s - %s' % (key, str(e)))
+ for key, value in config.items():
+ if value is None:
+ usage("Option --%s is required." % key)
+ if minargs is not None and len(args) < minargs:
+ usage("Must supply at least %d args." % minargs)
+ if maxargs is not None and len(args) > maxargs:
+ usage("Too many args - %d max." % maxargs)
+ return (config, args)
+
+def test_parseargs():
+ assert parseargs(('d', '--a', 'pq', 'e', '--b', '3', '--c', '4.5', 'f'), (('a', 'x', ''), ('b', 1, ''), ('c', 2.3, ''))) == ({'a': 'pq', 'b': 3, 'c': 4.5}, ['d', 'e', 'f'])
+ assert parseargs([], [('a', 'x', '')]) == ({'a': 'x'}, [])
+ assert parseargs(['--a', 'x', '--a', 'y'], [('a', '', '')]) == ({'a': 'y'}, [])
+    try:
+        parseargs([], [('a', None, '')])  # 'a' has no default here, so it is required
+    except ValueError:
+        pass
+ try:
+ parseargs(['--a', 'x'], [])
+ except ValueError:
+ pass
+ try:
+ parseargs(['--a'], [('a', 'x', '')])
+ except ValueError:
+ pass
+ try:
+ parseargs([], [], 1, 2)
+ except ValueError:
+ pass
+ assert parseargs(['x'], [], 1, 2) == ({}, ['x'])
+ assert parseargs(['x', 'y'], [], 1, 2) == ({}, ['x', 'y'])
+ try:
+ parseargs(['x', 'y', 'z'], [], 1, 2)
+ except ValueError:
+ pass
+ try:
+ parseargs(['--a', '2.0'], [('a', 3, '')])
+ except ValueError:
+ pass
+ try:
+ parseargs(['--a', 'z'], [('a', 2.1, '')])
+ except ValueError:
+ pass
+
--- /dev/null
+# Written by John Hoffman and Uoti Urpala
+# see LICENSE.txt for license information
+from bencode import bencode, bdecode
+from BT1.btformats import check_info
+from os.path import exists, isfile
+from sha import sha
+import sys, os
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+NOISY = False
+
+def _errfunc(x):
+ print ":: "+x
+
+def parsedir(directory, parsed, files, blocked,
+ exts = ['.torrent'], return_metainfo = False, errfunc = _errfunc):
+ if NOISY:
+ errfunc('checking dir')
+ dirs_to_check = [directory]
+ new_files = {}
+ new_blocked = {}
+ torrent_type = {}
+ while dirs_to_check: # first, recurse directories and gather torrents
+ directory = dirs_to_check.pop()
+ newtorrents = False
+ for f in os.listdir(directory):
+ newtorrent = None
+ for ext in exts:
+ if f.endswith(ext):
+ newtorrent = ext[1:]
+ break
+ if newtorrent:
+ newtorrents = True
+ p = os.path.join(directory, f)
+ new_files[p] = [(os.path.getmtime(p), os.path.getsize(p)), 0]
+ torrent_type[p] = newtorrent
+ if not newtorrents:
+ for f in os.listdir(directory):
+ p = os.path.join(directory, f)
+ if os.path.isdir(p):
+ dirs_to_check.append(p)
+
+ new_parsed = {}
+ to_add = []
+ added = {}
+ removed = {}
+ # files[path] = [(modification_time, size), hash], hash is 0 if the file
+ # has not been successfully parsed
+ for p,v in new_files.items(): # re-add old items and check for changes
+ oldval = files.get(p)
+ if not oldval: # new file
+ to_add.append(p)
+ continue
+ h = oldval[1]
+ if oldval[0] == v[0]: # file is unchanged from last parse
+ if h:
+ if blocked.has_key(p): # parseable + blocked means duplicate
+ to_add.append(p) # other duplicate may have gone away
+ else:
+ new_parsed[h] = parsed[h]
+ new_files[p] = oldval
+ else:
+ new_blocked[p] = 1 # same broken unparseable file
+ continue
+ if parsed.has_key(h) and not blocked.has_key(p):
+ if NOISY:
+ errfunc('removing '+p+' (will re-add)')
+ removed[h] = parsed[h]
+ to_add.append(p)
+
+ to_add.sort()
+ for p in to_add: # then, parse new and changed torrents
+ new_file = new_files[p]
+ v,h = new_file
+ if new_parsed.has_key(h): # duplicate
+ if not blocked.has_key(p) or files[p][0] != v:
+ errfunc('**warning** '+
+ p +' is a duplicate torrent for '+new_parsed[h]['path'])
+ new_blocked[p] = 1
+ continue
+
+ if NOISY:
+ errfunc('adding '+p)
+ try:
+ ff = open(p, 'rb')
+ d = bdecode(ff.read())
+ check_info(d['info'])
+ h = sha(bencode(d['info'])).digest()
+ new_file[1] = h
+ if new_parsed.has_key(h):
+ errfunc('**warning** '+
+ p +' is a duplicate torrent for '+new_parsed[h]['path'])
+ new_blocked[p] = 1
+ continue
+
+ a = {}
+ a['path'] = p
+ f = os.path.basename(p)
+ a['file'] = f
+ a['type'] = torrent_type[p]
+ i = d['info']
+ l = 0
+ nf = 0
+ if i.has_key('length'):
+ l = i.get('length',0)
+ nf = 1
+ elif i.has_key('files'):
+ for li in i['files']:
+ nf += 1
+ if li.has_key('length'):
+ l += li['length']
+ a['numfiles'] = nf
+ a['length'] = l
+ a['name'] = i.get('name', f)
+ def setkey(k, d = d, a = a):
+ if d.has_key(k):
+ a[k] = d[k]
+ setkey('failure reason')
+ setkey('warning message')
+ setkey('announce-list')
+ if return_metainfo:
+ a['metainfo'] = d
+ except:
+ errfunc('**warning** '+p+' has errors')
+ new_blocked[p] = 1
+ continue
+ try:
+ ff.close()
+ except:
+ pass
+ if NOISY:
+ errfunc('... successful')
+ new_parsed[h] = a
+ added[h] = a
+
+ for p,v in files.items(): # and finally, mark removed torrents
+ if not new_files.has_key(p) and not blocked.has_key(p):
+ if NOISY:
+ errfunc('removing '+p)
+ removed[v[1]] = parsed[v[1]]
+
+ if NOISY:
+ errfunc('done checking')
+ return (new_parsed, new_files, new_blocked, added, removed)
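+
+# Rough usage sketch (names below are illustrative, not part of this module):
+#
+#   cache, files, blocked = {}, {}, {}
+#   while True:
+#       cache, files, blocked, added, removed = parsedir('./torrents',
+#                                                        cache, files, blocked)
+#       # start downloads for the hashes in 'added',
+#       # stop downloads for the hashes in 'removed'
+#       time.sleep(scan_interval)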
+
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from array import array
+from threading import Lock
+# import inspect
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+DEBUG = False
+
+class SingleBuffer:
+ def __init__(self, pool):
+ self.pool = pool
+ self.buf = array('c')
+
+ def init(self):
+ if DEBUG:
+ print self.count
+ '''
+ for x in xrange(6,1,-1):
+ try:
+ f = inspect.currentframe(x).f_code
+ print (f.co_filename,f.co_firstlineno,f.co_name)
+ del f
+ except:
+ pass
+ print ''
+ '''
+ self.length = 0
+
+ def append(self, s):
+ l = self.length+len(s)
+ self.buf[self.length:l] = array('c',s)
+ self.length = l
+
+ def __len__(self):
+ return self.length
+
+ def __getslice__(self, a, b):
+ if b > self.length:
+ b = self.length
+ if b < 0:
+ b += self.length
+ if a == 0 and b == self.length and len(self.buf) == b:
+ return self.buf # optimization
+ return self.buf[a:b]
+
+ def getarray(self):
+ return self.buf[:self.length]
+
+ def release(self):
+ if DEBUG:
+ print -self.count
+ self.pool.release(self)
+
+
+class BufferPool:
+ def __init__(self):
+ self.pool = []
+ self.lock = Lock()
+ if DEBUG:
+ self.count = 0
+
+ def new(self):
+ self.lock.acquire()
+ if self.pool:
+ x = self.pool.pop()
+ else:
+ x = SingleBuffer(self)
+ if DEBUG:
+ self.count += 1
+ x.count = self.count
+ x.init()
+ self.lock.release()
+ return x
+
+ def release(self, x):
+ self.pool.append(x)
+
+
+_pool = BufferPool()
+PieceBuffer = _pool.new
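+
+# Minimal usage sketch: buffers are recycled through the module-level pool.
+#
+#   b = PieceBuffer()     # grab (or reuse) a buffer
+#   b.append('abc')
+#   assert len(b) == 3
+#   b.release()           # hand it back to the pool for reuse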
--- /dev/null
+# Written by Bram Cohen
+# see LICENSE.txt for license information
+
+from select import select, error
+from time import sleep
+from types import IntType
+from bisect import bisect
+POLLIN = 1
+POLLOUT = 2
+POLLERR = 8
+POLLHUP = 16
+
+class poll:
+ def __init__(self):
+ self.rlist = []
+ self.wlist = []
+
+ def register(self, f, t):
+ if type(f) != IntType:
+ f = f.fileno()
+ if (t & POLLIN):
+ insert(self.rlist, f)
+ else:
+ remove(self.rlist, f)
+ if (t & POLLOUT):
+ insert(self.wlist, f)
+ else:
+ remove(self.wlist, f)
+
+ def unregister(self, f):
+ if type(f) != IntType:
+ f = f.fileno()
+ remove(self.rlist, f)
+ remove(self.wlist, f)
+
+ def poll(self, timeout = None):
+ if self.rlist or self.wlist:
+ try:
+ r, w, e = select(self.rlist, self.wlist, [], timeout)
+ except ValueError:
+ return None
+ else:
+ sleep(timeout)
+ return []
+ result = []
+ for s in r:
+ result.append((s, POLLIN))
+ for s in w:
+ result.append((s, POLLOUT))
+ return result
+
+def remove(list, item):
+ i = bisect(list, item)
+ if i > 0 and list[i-1] == item:
+ del list[i-1]
+
+def insert(list, item):
+ i = bisect(list, item)
+ if i == 0 or list[i-1] != item:
+ list.insert(i, item)
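+
+# Minimal usage sketch of the select()-based poll emulation above (sock is any
+# object with a fileno() method, or a plain file descriptor):
+#
+#   p = poll()
+#   p.register(sock, POLLIN)
+#   for fd, event in p.poll(timeout):   # timeout is handed straight to select()
+#       ...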
+
+def test_remove():
+ x = [2, 4, 6]
+ remove(x, 2)
+ assert x == [4, 6]
+ x = [2, 4, 6]
+ remove(x, 4)
+ assert x == [2, 6]
+ x = [2, 4, 6]
+ remove(x, 6)
+ assert x == [2, 4]
+ x = [2, 4, 6]
+ remove(x, 5)
+ assert x == [2, 4, 6]
+ x = [2, 4, 6]
+ remove(x, 1)
+ assert x == [2, 4, 6]
+ x = [2, 4, 6]
+ remove(x, 7)
+ assert x == [2, 4, 6]
+ x = [2, 4, 6]
+ remove(x, 5)
+ assert x == [2, 4, 6]
+ x = []
+ remove(x, 3)
+ assert x == []
+
+def test_insert():
+ x = [2, 4]
+ insert(x, 1)
+ assert x == [1, 2, 4]
+ x = [2, 4]
+ insert(x, 3)
+ assert x == [2, 3, 4]
+ x = [2, 4]
+ insert(x, 5)
+ assert x == [2, 4, 5]
+ x = [2, 4]
+ insert(x, 2)
+ assert x == [2, 4]
+ x = [2, 4]
+ insert(x, 4)
+ assert x == [2, 4]
+ x = [2, 3, 4]
+ insert(x, 3)
+ assert x == [2, 3, 4]
+ x = []
+ insert(x, 3)
+ assert x == [3]
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from bisect import bisect, insort
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+ bool = lambda x: not not x
+
+hexbinmap = {
+ '0': '0000',
+ '1': '0001',
+ '2': '0010',
+ '3': '0011',
+ '4': '0100',
+ '5': '0101',
+ '6': '0110',
+ '7': '0111',
+ '8': '1000',
+ '9': '1001',
+ 'a': '1010',
+ 'b': '1011',
+ 'c': '1100',
+ 'd': '1101',
+ 'e': '1110',
+ 'f': '1111',
+ 'x': '0000',
+}
+
+chrbinmap = {}
+for n in xrange(256):
+ b = []
+ nn = n
+ for i in xrange(8):
+ if nn & 0x80:
+ b.append('1')
+ else:
+ b.append('0')
+ nn <<= 1
+ chrbinmap[n] = ''.join(b)
+
+
+def to_bitfield_ipv4(ip):
+ ip = ip.split('.')
+ if len(ip) != 4:
+ raise ValueError, "bad address"
+ b = []
+ for i in ip:
+ b.append(chrbinmap[int(i)])
+ return ''.join(b)
+
+def to_bitfield_ipv6(ip):
+ b = ''
+ doublecolon = False
+
+ if ip == '':
+ raise ValueError, "bad address"
+ if ip == '::': # boundary handling
+ ip = ''
+ elif ip[:2] == '::':
+ ip = ip[1:]
+ elif ip[0] == ':':
+ raise ValueError, "bad address"
+ elif ip[-2:] == '::':
+ ip = ip[:-1]
+ elif ip[-1] == ':':
+ raise ValueError, "bad address"
+ for n in ip.split(':'):
+ if n == '': # double-colon
+ if doublecolon:
+ raise ValueError, "bad address"
+ doublecolon = True
+ b += ':'
+ continue
+ if n.find('.') >= 0: # IPv4
+ n = to_bitfield_ipv4(n)
+ b += n + '0'*(32-len(n))
+ continue
+ n = ('x'*(4-len(n))) + n
+ for i in n:
+ b += hexbinmap[i]
+ if doublecolon:
+ pos = b.find(':')
+ b = b[:pos]+('0'*(129-len(b)))+b[pos+1:]
+ if len(b) != 128: # always check size
+ raise ValueError, "bad address"
+ return b
+
+ipv4addrmask = to_bitfield_ipv6('::ffff:0:0')[:96]
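+
+# The helpers above turn an address into a string of '0'/'1' characters (32 for
+# IPv4, 128 for IPv6), so "is this address inside that range?" becomes a simple
+# string-prefix test, and insort/bisect keep the range lists sorted for lookup.
+# ipv4addrmask is the 96-bit prefix of the IPv4-mapped IPv6 range ::ffff:0:0/96.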
+
+class IP_List:
+ def __init__(self):
+ self.ipv4list = []
+ self.ipv6list = []
+
+ def __nonzero__(self):
+ return bool(self.ipv4list or self.ipv6list)
+
+
+ def append(self, ip, depth = 256):
+ if ip.find(':') < 0: # IPv4
+ insort(self.ipv4list,to_bitfield_ipv4(ip)[:depth])
+ else:
+ b = to_bitfield_ipv6(ip)
+ if b.startswith(ipv4addrmask):
+ insort(self.ipv4list,b[96:][:depth-96])
+ else:
+ insort(self.ipv6list,b[:depth])
+
+
+ def includes(self, ip):
+ if not (self.ipv4list or self.ipv6list):
+ return False
+ if ip.find(':') < 0: # IPv4
+ b = to_bitfield_ipv4(ip)
+ else:
+ b = to_bitfield_ipv6(ip)
+ if b.startswith(ipv4addrmask):
+ b = b[96:]
+ if len(b) > 32:
+ l = self.ipv6list
+ else:
+ l = self.ipv4list
+ for map in l[bisect(l,b)-1:]:
+ if b.startswith(map):
+ return True
+ if map > b:
+ return False
+ return False
+
+
+ def read_fieldlist(self, file): # reads a list from a file in the format 'ip/len <whatever>'
+ f = open(file, 'r')
+ while True:
+ line = f.readline()
+ if not line:
+ break
+ line = line.strip().expandtabs()
+ if not line or line[0] == '#':
+ continue
+ try:
+ line, garbage = line.split(' ',1)
+ except:
+ pass
+ try:
+ line, garbage = line.split('#',1)
+ except:
+ pass
+ try:
+ ip, depth = line.split('/')
+ except:
+ ip = line
+ depth = None
+ try:
+ if depth is not None:
+ depth = int(depth)
+ self.append(ip,depth)
+ except:
+ print '*** WARNING *** could not parse IP range: '+line
+ f.close()
+
+
+ def set_intranet_addresses(self):
+ self.append('127.0.0.1',8)
+ self.append('10.0.0.0',8)
+ self.append('172.16.0.0',12)
+ self.append('192.168.0.0',16)
+ self.append('169.254.0.0',16)
+ self.append('::1')
+ self.append('fe80::',16)
+ self.append('fec0::',16)
+
+ def set_ipv4_addresses(self):
+ self.append('::ffff:0:0',96)
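+
+    # Small usage sketch (values are only examples):
+    #
+    #   l = IP_List()
+    #   l.set_intranet_addresses()
+    #   l.includes('192.168.1.10')   # True  - inside 192.168.0.0/16
+    #   l.includes('8.8.8.8')        # False - not in any private range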
+
+def ipv6_to_ipv4(ip):
+ ip = to_bitfield_ipv6(ip)
+ if not ip.startswith(ipv4addrmask):
+ raise ValueError, "not convertible to IPv4"
+ ip = ip[-32:]
+ x = ''
+ for i in range(4):
+ x += str(int(ip[:8],2))
+ if i < 3:
+ x += '.'
+ ip = ip[8:]
+ return x
+
+def to_ipv4(ip):
+ if is_ipv4(ip):
+ _valid_ipv4(ip)
+ return ip
+ return ipv6_to_ipv4(ip)
+
+def is_ipv4(ip):
+ return ip.find(':') < 0
+
+def _valid_ipv4(ip):
+ ip = ip.split('.')
+ if len(ip) != 4:
+ raise ValueError
+ for i in ip:
+ chr(int(i))
+
+def is_valid_ip(ip):
+ try:
+ if not ip:
+ return False
+ if is_ipv4(ip):
+ _valid_ipv4(ip)
+ return True
+ to_bitfield_ipv6(ip)
+ return True
+ except:
+ return False
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from binascii import unhexlify
+
+try:
+ True
+except:
+ True = 1
+ False = 0
+
+
+# parses a list of torrent hashes, in the format of one hash per line in hex format
+
+def parsetorrentlist(filename, parsed):
+ new_parsed = {}
+ added = {}
+ removed = parsed
+ f = open(filename, 'r')
+ while True:
+ l = f.readline()
+ if not l:
+ break
+ l = l.strip()
+ try:
+ if len(l) != 40:
+ raise ValueError, 'bad line'
+ h = unhexlify(l)
+ except:
+            print '*** WARNING *** could not parse line in torrent list: '+l
+            continue    # skip this line; 'h' would otherwise be stale or undefined
+ if parsed.has_key(h):
+ del removed[h]
+ else:
+ added[h] = True
+ new_parsed[h] = True
+ f.close()
+ return (new_parsed, added, removed)
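+
+# Note on the return value (descriptive comment only): new_parsed is the set of
+# hashes currently listed in the file, added holds the hashes that were not in
+# 'parsed' before, and removed is the 'parsed' dict itself with the still-listed
+# hashes deleted from it - i.e. the caller's dict is modified in place.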
+
--- /dev/null
+# Written by John Hoffman
+# see LICENSE.txt for license information
+
+from httplib import HTTPConnection, HTTPSConnection, HTTPException
+from urlparse import urlparse
+from bencode import bdecode
+import socket
+from gzip import GzipFile
+from StringIO import StringIO
+from urllib import quote, unquote
+from __init__ import product_name, version_short
+
+VERSION = product_name+'/'+version_short
+MAX_REDIRECTS = 10
+
+
+class btHTTPcon(HTTPConnection): # attempt to add automatic connection timeout
+ def connect(self):
+ HTTPConnection.connect(self)
+ try:
+ self.sock.settimeout(30)
+ except:
+ pass
+
+class btHTTPScon(HTTPSConnection): # attempt to add automatic connection timeout
+ def connect(self):
+ HTTPSConnection.connect(self)
+ try:
+ self.sock.settimeout(30)
+ except:
+ pass
+
+class urlopen:
+ def __init__(self, url):
+        self.tries = 0
+        self.error_return = None   # initialize before _open(), which may store a failure response here
+        self._open(url.strip())
+
+ def _open(self, url):
+ self.tries += 1
+ if self.tries > MAX_REDIRECTS:
+ raise IOError, ('http error', 500,
+ "Internal Server Error: Redirect Recursion")
+ (scheme, netloc, path, pars, query, fragment) = urlparse(url)
+ if scheme != 'http' and scheme != 'https':
+ raise IOError, ('url error', 'unknown url type', scheme, url)
+ url = path
+ if pars:
+ url += ';'+pars
+ if query:
+ url += '?'+query
+# if fragment:
+ try:
+ if scheme == 'http':
+ self.connection = btHTTPcon(netloc)
+ else:
+ self.connection = btHTTPScon(netloc)
+ self.connection.request('GET', url, None,
+ { 'User-Agent': VERSION,
+ 'Accept-Encoding': 'gzip' } )
+ self.response = self.connection.getresponse()
+ except HTTPException, e:
+ raise IOError, ('http error', str(e))
+ status = self.response.status
+ if status in (301,302):
+ try:
+ self.connection.close()
+ except:
+ pass
+ self._open(self.response.getheader('Location'))
+ return
+ if status != 200:
+ try:
+ data = self._read()
+ d = bdecode(data)
+ if d.has_key('failure reason'):
+ self.error_return = data
+ return
+ except:
+ pass
+ raise IOError, ('http error', status, self.response.reason)
+
+ def read(self):
+ if self.error_return:
+ return self.error_return
+ return self._read()
+
+ def _read(self):
+ data = self.response.read()
+ if self.response.getheader('Content-Encoding','').find('gzip') >= 0:
+ try:
+ compressed = StringIO(data)
+ f = GzipFile(fileobj = compressed)
+ data = f.read()
+ except:
+ raise IOError, ('http error', 'got corrupt response')
+ return data
+
+ def close(self):
+ self.connection.close()
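+
+# Minimal usage sketch (the URL is only an example):
+#
+#   u = urlopen('http://tracker.example.org/announce?info_hash=...')
+#   data = u.read()    # gzip-decoded body, or the raw 'failure reason' response
+#   u.close()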
--- /dev/null
+#!/usr/bin/python
+
+import os
+import cgi
+
+from web.http import HTTPResponse
+
+for language in ("de", "en",):
+    if os.environ.get("HTTP_ACCEPT_LANGUAGE", "").startswith(language):
+ break
+
+site = cgi.FieldStorage().getfirst("site") or "index"
+
+sites = { "ipfire.org" : "http://www.ipfire.org/",
+ "www.ipfire.org" : "/%s/%s" % (language, site,),
+ "source.ipfire.org" : "http://www.ipfire.org/%s/source" % language,
+ "tracker.ipfire.org" : "http://www.ipfire.org/%s/tracker" % language,
+ "torrent.ipfire.org" : "http://www.ipfire.org/%s/tracker" % language,
+ "download.ipfire.org" : "http://www.ipfire.org/%s/download" % language,
+ "people.ipfire.org" : "http://wiki.ipfire.org/%s/people/start" % language, }
+
+httpheader = []
+
+try:
+ httpheader.append(("Location", sites[os.environ["SERVER_NAME"]]))
+except KeyError:
+ httpheader.append(("Location", sites["www.ipfire.org"]))
+
+h = HTTPResponse(302, httpheader, None)
+h.execute()
+++ /dev/null
-User-agent: *
-Disallow:
<div id="line1">
%(menu)s
<div id="lang">
- <a href="/%(document_name)s/en" ><img src="/images/en.gif" alt="english" /></a>
- <a href="/%(document_name)s/de" ><img src="/images/de.gif" alt="german" /></a>
+ %(languages)s
</div>
- </div> <!-- Line 1 -->
+ </div>
<div id="line2">
- <h1><span>IPFire</span>.org</h1>
- </div> <!-- Line 2 -->
+ <h1>%(server)s</h1>
+ </div>
<div id="line3">
- <h2>Security today!</h2>
- </div> <!-- Line 3 -->
+ <h2>%(slogan)s</h2>
+ </div>
</div>
</div>
<div id="main">
<div id="main_inner" class="fixed">
- <table>
- <tr>
- <td id="sh-tl"></td>
- <td id="sh-top"></td>
- <td id="sh-tr"></td>
- </tr>
- <tr>
- <td id="sh-lft"></td>
- <td id="no-sh">
-
+ <table>
+ <tr>
+ <td id="sh-tl"></td>
+ <td id="sh-top"></td>
+ <td id="sh-tr"></td>
+ </tr>
+ <tr>
+ <td id="sh-lft"></td>
+ <td id="no-sh">
+ <div id="primaryContent_2columns">
+ <div id="columnA_2columns">
+ %(content)s
+ <br class="clear" />
+ </div>
+ </div>
+ <div id="secondaryContent_2columns">
+ <div id="columnC_2columns">
+ %(sidebar)s
+ <br class="clear" />
+ </div>
+ </div>
+ </td>
+ <td id="sh-rgt"></td>
+ </tr>
+ <tr>
+ <td id="sh-bl"></td>
+ <td id="sh-btn"></td>
+ <td id="sh-br"></td>
+ </tr>
+ </table>
+ </div>
+ </div>
+ <div id="footer" class="fixed2">
+ Copyright © %(year)s IPFire.org. All rights reserved. <a href="/%(lang)s/imprint">Imprint</a>
+ </div>
+ </body>
+</html>
--- /dev/null
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+import os
+import cgi
+import time
+import random
+import simplejson as json
+
+from http import HTTPResponse, WebError
+
+class Data:
+ def __init__(self):
+ self.output = ""
+
+ def w(self, s):
+ self.output += "%s\n" % s
+
+ def __call__(self):
+ return self.output
+
+
+class Json:
+ def __init__(self, file):
+ f = open(file)
+ data = f.read()
+ data = data.replace('\n', '') # Remove all \n
+ data = data.replace('\t', '') # Remove all \t
+ self.json = json.loads(data)
+ f.close()
+
+
+class Page(Data):
+ def include(self, file):
+ f = open(file)
+ output = f.read()
+ f.close()
+ self.w(output % self.data)
+
+ def menu(self):
+ m = Menu(self.langs.current)
+ return m()
+
+ def __init__(self, title, content, sidebar=None):
+ self.output = ""
+ self.langs = Languages()
+ self.data = {"server": os.environ["SERVER_NAME"].replace("ipfire", "<span>ipfire</span>"),
+ "title" : "%s - %s" % (os.environ["SERVER_NAME"], title,),
+ "menu" : self.menu(),
+ "document_name" : title,
+ "lang" : self.langs.current,
+ "languages" : self.langs.menu(title),
+ "year" : time.strftime("%Y"),
+ "slogan" : "Security today!",
+ "content" : content(self.langs.current),
+ "sidebar" : "", }
+ if sidebar:
+ self.data["sidebar"] = sidebar(self.langs.current)
+
+ def __call__(self):
+ try:
+ self.include("template.inc")
+ code = 200
+ except WebError:
+ code = 500
+ h = HTTPResponse(code)
+ h.execute(self.output)
+
+
+class News(Json):
+ def __init__(self, limit=3):
+ Json.__init__(self, "news.json")
+ self.news = self.json.values()
+ if limit:
+ self.news = self.news[:limit]
+ self.news.reverse()
+
+ def html(self, lang="en"):
+ s = ""
+ for item in self.news:
+ for i in ("content", "subject",):
+ if type(item[i]) == type({}):
+ item[i] = item[i][lang]
+ b = Box(item["date"] + " - " + item["subject"], "by %s" % item["author"])
+ b.w(item["content"])
+ s += b()
+ return s
+
+ __call__ = html
+
+ def headlines(self, lang="en"):
+ headlines = []
+ for item in self.news:
+ if type(item["subject"]) == type({}):
+ item["subject"] = item["subject"][lang]
+ headlines.append((item["subject"],))
+ return headlines
+
+
+class Menu(Json):
+ def __init__(self, lang):
+ self.lang = lang
+ Json.__init__(self, "menu.json")
+
+ def __call__(self):
+ s = """<div id="menu"><ul>\n"""
+ for item in self.json.values():
+ item["active"] = ""
+
+ # Grab language
+ if type(item["name"]) == type({}):
+ item["name"] = item["name"][self.lang]
+
+ # Add language attribute to local uris
+ if item["uri"].startswith("/"):
+ item["uri"] = "/%s%s" % (self.lang, item["uri"],)
+
+ #if item["uri"].find(os.environ["SCRIPT_NAME"]):
+ # item["active"] = "class=\"active\""
+
+ s += """<li><a href="%(uri)s" %(active)s>%(name)s</a></li>\n""" % item
+ s += "</ul></div>"
+ return s
+
+
+class Banners(Json):
+ def __init__(self, lang="en"):
+ self.lang = lang
+ Json.__init__(self, "banners.json")
+
+ def random(self):
+ banner = random.choice(self.json.values())
+ return banner
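+
+    # Each entry in banners.json is expected to be a dict providing at least
+    # "uri", "title" and "link", since Sidebar.content() below formats the
+    # selected banner with exactly those keys.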
+
+
+class Languages:
+ def __init__(self, doc=""):
+ self.available = []
+
+ for lang in ("de", "en",):
+ self.append(lang,)
+
+ self.current = cgi.FieldStorage().getfirst("lang") or "en"
+
+ def append(self, lang):
+ self.available.append(lang)
+
+ def menu(self, doc):
+ s = ""
+ for lang in self.available:
+ s += """<a href="/%(lang)s/%(doc)s"><img src="/images/%(lang)s.gif" alt="%(lang)s" /></a>""" % \
+ { "lang" : lang, "doc" : doc, }
+ return s
+
+
+class Box(Data):
+ def __init__(self, headline, subtitle=""):
+ Data.__init__(self)
+ self.w("""<div class="post"><h3>%s</h3>""" % (headline,))
+ if subtitle:
+ self.w("""<ul class="post_info"><li class="date">%s</li></ul>""" % (subtitle,))
+
+ def __call__(self):
+ self.w("""<br class="clear" /></div>""")
+ return Data.__call__(self)
+
+
+class Sidebar(Data):
+ def __init__(self, name):
+ Data.__init__(self)
+
+ def content(self, lang):
+ self.w("""<h4>Test Page</h4>
+ <p>Lorem ipsum dolor sit amet, consectetuer sadipscing elitr,
+ sed diam nonumy eirmod tempor invidunt ut labore et dolore magna
+ aliquyam erat, sed diam voluptua. At vero eos et accusam et justo
+ duo dolores et ea rebum.</p>""")
+ banners = Banners()
+ self.w("""<h4>%(title)s</h4><a href="%(link)s" target="_blank">
+ <img src="%(uri)s" /></a>""" % banners.random())
+
+ def __call__(self, lang):
+ self.content(lang)
+ return Data.__call__(self)
+
+
+class Content(Data):
+ def __init__(self, name):
+ Data.__init__(self)
+
+ def content(self):
+ self.w("""<h3>Test Page</h3>
+ <p>Lorem ipsum dolor sit amet, consectetuer sadipscing elitr,
+ sed diam nonumy eirmod tempor invidunt ut labore et dolore magna
+ aliquyam erat, sed diam voluptua. At vero eos et accusam et justo
+ duo dolores et ea rebum. Stet clita kasd gubergren, no sea takimata
+ sanctus est Lorem ipsum dolor sit amet. Lorem ipsum dolor sit amet,
+ consetetur sadipscing elitr, sed diam nonumy eirmod tempor invidunt
+ ut labore et dolore magna aliquyam erat, sed diam voluptua. At vero
+ eos et accusam et justo duo dolores et ea rebum. Stet clita kasd
+ gubergren, no sea takimata sanctus est Lorem ipsum dolor sit amet.
+ Lorem ipsum dolor sit amet, consetetur sadipscing elitr, sed diam
+ nonumy eirmod tempor invidunt ut labore et dolore magna aliquyam erat,
+ sed diam voluptua. At vero eos et accusam et justo duo dolores et ea
+ rebum. Stet clita kasd gubergren, no sea takimata sanctus est Lorem
+ ipsum dolor sit amet.</p>""")
+
+ b = Box("Test box one", "Subtitle of box")
+        b.w("""<p>Duis autem vel eum iriure dolor in hendrerit in vulputate velit
+ esse molestie consequat, vel illum dolore eu feugiat nulla facilisis
+ at vero eros et accumsan et iusto odio dignissim qui blandit praesent
+ luptatum zzril delenit augue duis dolore te feugait nulla facilisi.
+ Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam
+ nonummy nibh euismod tincidunt ut laoreet dolore magna aliquam erat
+ volutpat.</p>""")
+ self.w(b())
+
+ b = Box("Test box two", "Subtitle of box")
+        b.w("""<p>Ut wisi enim ad minim veniam, quis nostrud exerci tation ullamcorper
+ suscipit lobortis nisl ut aliquip ex ea commodo consequat. Duis autem
+ vel eum iriure dolor in hendrerit in vulputate velit esse molestie
+ consequat, vel illum dolore eu feugiat nulla facilisis at vero eros
+ et accumsan et iusto odio dignissim qui blandit praesent luptatum
+ zzril delenit augue duis dolore te feugait nulla facilisi.</p>""")
+ self.w(b())
+
+ def __call__(self, lang="en"):
+ self.content()
+ return Data.__call__(self)
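+
+# Rough sketch of how a page is assembled from these classes (illustrative;
+# assumes template.inc, menu.json etc. are present and the CGI environment,
+# e.g. SERVER_NAME, is set):
+#
+#   page = Page("index", Content("index"), Sidebar("index"))
+#   page()   # renders template.inc with menu, content and sidebar, then
+#            # prints the HTTP response to stdout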
--- /dev/null
+#!/usr/bin/python
+# -*- coding: utf-8 -*-
+
+code2msg = { 200 : "OK",
+ 302 : "Temporarily Moved",
+ 404 : "Not found",
+ 500 : "Internal Server Error", }
+
+class HTTPResponse:
+ def __init__(self, code, header=None, type="text/html"):
+ self.code = code
+
+ print "Status: %s - %s" % (self.code, code2msg[self.code],)
+ if self.code == 302:
+ print "Pragma: no-cache"
+ if type:
+ print "Content-type: " + type
+ if header:
+ for (key, value,) in header:
+ print "%s: %s" % (key, value,)
+ print
+
+ def execute(self, content=""):
+ if self.code == 200:
+ print content
+
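+
+    # Note: the constructor already prints the CGI status line and headers;
+    # execute() only emits the body, and only for a 200 response.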
+class WebError(Exception):
+ pass