[sldev] P2P/Squid Web Textures: Enabling Greater Quality Images - draft 2

Kamilion kamilion at gmail.com
Sun Jul 8 03:01:11 PDT 2007



Okay, here's another idea -- Squid currently supports store digests,
which are bitmaps that lossily represent the contents of an entire
Squid cache, biased toward hits. It also supports client streams,
which allow other applications to access the Squid cache.
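A store digest is essentially a Bloom filter: a membership test can
return false positives but never false negatives, which is what
"lossily, biased to hits" amounts to. Here's a toy sketch of the idea
-- note this is *not* Squid's actual digest wire format; the bit count
and hashing scheme are made up for illustration:

```python
import hashlib

class Digest:
    """Toy Bloom filter illustrating a lossy cache digest."""

    def __init__(self, nbits=8192):
        self.nbits = nbits
        self.bits = bytearray(nbits // 8)

    def _positions(self, key):
        # Derive four bit positions from the MD5 of the cache key.
        h = hashlib.md5(key.encode()).digest()
        return [int.from_bytes(h[i:i + 4], "big") % self.nbits
                for i in range(0, 16, 4)]

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        # True may be a false positive; False is always correct.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```

A peer that pulls another cache's digest can cheaply ask "might that
cache have texture X?" before opening a connection, at the cost of the
occasional wasted request on a false positive.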

It also runs natively on Windows when built with MinGW32.

Technically, it shouldn't be too hard to build a control program that
pulls the store digest from Squid and integrates a torrent tracker, a
torrent client, and some form of SuperTracker client that keeps track
of the various "SLSquid" servers.

This solves most of the problem with 'revealed' IP addresses.

We would end up with a meshed network of SLSquids that could transfer
textures between each other in the background.

At that point, the SuperTracker is managing a round robin DNS
containing all of the known SLSquids, which the SL client could be
pointed at.

This same sort of DNS management is commonly used in some larger IRC
networks, where you connect to irc.network.com and you're redirected
to a server appropriate for your geographical region.
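From the client's side, round-robin DNS needs nothing special: resolve
one well-known name, get back the whole set of A records, pick one. A
minimal sketch -- the hostname is a placeholder, `resolve` needs
working DNS, and `choose` is a pure helper:

```python
import random
import socket

def resolve(hostname="cache.slsquid.example", port=3128):
    # All A records the SuperTracker currently publishes for the mesh
    # (hostname here is hypothetical).
    infos = socket.getaddrinfo(hostname, port, socket.AF_INET,
                               socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

def choose(addresses, rng=random):
    # Spread clients across the mesh by picking one cache at random.
    return rng.choice(list(addresses))
```

Geographic steering, as the IRC networks do it, would live on the
SuperTracker side -- it just changes which A records get published.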

This also keeps a central point of authority which would be useful to
block abusive users and/or servers.


From here, we have a meshed network of interconnected aggregate
caches that serve content between each other on demand with the slower
torrent protocol, while serving content to SL clients via fast HTTP
from the cache. It would also allow the later addition of SL clients
directly downloading full 'packs' of data from the caches, containing
an entire object's worth of data. Say you have a linked object with
163 prims, each prim displaying 5 independent textures, all of them
completely different. That's 815 textures of various sizes, not
including sculpt textures. Add those in to get, say, an even 900
textures. Plus this particular object has 20 sounds and 80
animations, for an even 1000 assets.
Let's say it's a dancefloor.

This entire object's contents can be placed into a single .torrent,
with all 1000 of those assets referenced. SLSquids or SL clients could
pull down an entire object relatively quickly.
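Building such an object-level metainfo file could look like the sketch
below. The bencoding follows the BitTorrent spec (BEP 3); the tracker
URL, pack name, and piece size are placeholders:

```python
import hashlib

def bencode(value):
    """Minimal BEP 3 bencoder for ints, strings, lists, and dicts."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, str):
        return bencode(value.encode())
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # Spec requires dictionary keys sorted as raw strings.
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in value.items())
        return b"d" + b"".join(bencode(k) + bencode(v)
                               for k, v in items) + b"e"
    raise TypeError(type(value))

def object_torrent(assets, piece_length=16384,
                   tracker="http://slsquid.example:6969/announce"):
    """assets: list of (relative_name, data_bytes) for one object."""
    blob = b"".join(data for _, data in assets)
    pieces = b"".join(hashlib.sha1(blob[i:i + piece_length]).digest()
                      for i in range(0, len(blob), piece_length))
    info = {"name": "object-pack",
            "piece length": piece_length,
            "pieces": pieces,
            "files": [{"length": len(data), "path": [name]}
                      for name, data in assets]}
    return bencode({"announce": tracker, "info": info})
```

One multi-file torrent per object means one tracker announce and one
piece set cover the whole dancefloor, instead of 1000 tiny swarms.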

One of the slowest parts of the torrent protocol is the fact that it
has to discover peers, which we're 'solving' by making the torrent
tracker a peer also; this has the benefit of quickly connecting to
other peers that are also connecting to the torrent tracker's client.
In this way, DHT networks can be formed quickly, and blocks can begin
transferring immediately. Also, since BitTorrent lets each torrent
choose its block size, faster block completion can be achieved by
using small blocks; we can rely on the fact that we're not
transferring massive amounts of data in each load -- even an object
with many assets would likely be under 10MB, and BT excels at packs of
small files. Object-level torrents would be a huge advantage over
single-asset methods, mainly because I haven't really seen the viewer
load a partial object; only octree a full object's visible prims away.
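The block-size trade-off above can be sketched as a simple tuning
function: smaller blocks complete (and can be re-shared) sooner, at
the cost of a longer piece-hash list. The target piece count and the
bounds here are assumed knobs, not anything mandated by BitTorrent:

```python
def pick_piece_length(total_bytes, target_pieces=512,
                      minimum=16384, maximum=1 << 20):
    """Round the piece size up by powers of two until the torrent
    would have at most target_pieces pieces, clamped to [min, max]."""
    size = minimum
    while size < maximum and total_bytes / size > target_pieces:
        size *= 2
    return size
```

For the sub-10MB object packs discussed above, this lands on small
16-32KB pieces, so peers start re-sharing almost immediately.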

Torrent clients also have quite good bandwidth limiting, and a forced
limit in the SL client could easily be chosen by default, such as
3KB/sec upload. Since the SLSquids take care of 90% of the block
transfers, client to client transfers are pretty much just icing on
the cake at that point, and don't really matter either way.
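A default cap like that could be a straightforward token bucket on the
upload path. This sketch uses the 3 KB/sec figure above; the burst
allowance is a hypothetical extra knob:

```python
import time

class UploadLimiter:
    """Token-bucket cap on client-to-client upload bandwidth."""

    def __init__(self, rate_bps=3 * 1024, burst=8 * 1024, now=None):
        self.rate = rate_bps          # tokens (bytes) added per second
        self.burst = burst            # maximum accumulated tokens
        self.tokens = float(burst)
        self.stamp = time.monotonic() if now is None else now

    def try_send(self, nbytes, now=None):
        """Return True if nbytes may be sent now, consuming tokens."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.burst,
                          self.tokens + (now - self.stamp) * self.rate)
        self.stamp = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False
```

Blocks that don't fit the budget are simply deferred; the SLSquids
pick up the slack, which is exactly the division of labor described
above.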


This is a perfect little 3rd party community project that requires
minimal serverside changes, uses mostly unmodified existing
applications (Squid, ctorrent/libtorrent/torrentflux,
bttrack.py/bnbteasytracker/phpbttrkplus/bytemonsoon), and is mostly
glue code sticking the ragtag little group together; in fact, I
wouldn't be surprised if it could be done easily in a dynamic language
like Perl, Python, PHP, or Ruby.

I'm not sure if we've gone over to asset transfer over HTTP yet, but I
do know it's on the roadmap already, and the sooner we get a handle on
a decent way to pull this off, the sooner we can gear right into that
roadmap.

It also decreases what I would assume to be a fairly massive bandwidth
and database load on the LL grid, which translates to improved quality
of experience for all: their transactions would deal with assets less,
and other systems like search, presence, messaging, and scripts could
be more reliable and concurrent, hopefully allowing far more people to
be on the grid at once.

It allows one more thing that could be quite useful: the ability to
use SL texture assets outside of SL by requesting them from a cache.
This would be handy for sites like SLExchange to display images of
products directly from SL, using the images most item creators already
use for vendors and primbox storefronts.
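For a site like SLExchange, fetching a texture would then be a single
HTTP GET against any SLSquid. The hostname and URL layout below are
entirely hypothetical -- whatever scheme the caches end up exposing:

```python
import urllib.request

def texture_url(uuid, host="cache.slsquid.example", port=3128):
    # Map an asset UUID to a (hypothetical) cache URL.
    return "http://%s:%d/texture/%s" % (host, port, uuid)

def fetch_texture(uuid, **kwargs):
    # Plain HTTP GET -- anything that speaks HTTP can use the cache.
    with urllib.request.urlopen(texture_url(uuid, **kwargs)) as resp:
        return resp.read()
```

No SL client, no special protocol: a web server's `<img>` tag pointed
at such a URL would be enough.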

Once again, this is a third-party program that *supplements* SL -- it
doesn't need to be part of the client. Everything necessary is already
Win32-compatible, so we could provide a simple installer package for
Windows for "private home network" caches, and Linux/BSD/Solaris
packages for larger (public?) servers.

-- Kamilion




On 7/7/07, Simon Nolan <simon.nolan at gaylifesl.com> wrote:
> All this talk of Squids and Amazon S3 stores and Torrents is nice,
> but why? Each viewer instance already has a texture cache. Instead of
> mucking around with external services, can't our viewers just talk to
> each other and share the textures in our caches?
>
> Why do we need a tracker? If you're in a sim, you already have a list
> of others who have the textures that you need -- though I'm not sure
> about how to go about resolving that list of residents in a sim into
> P2P connections. The only use for external services/caches would be
> for when you're the only person in a sim, and that's usually when I
> *don't* have a problem with textures.
>
> Why are we talking about Torrents? It appears to be suboptimal for
> small files, particularly sculptie textures. Should we consider our
> own p2p protocol, or basing it on something that avoids the ramp-up
> issues (and avoiding "tit for tat") in BitTorrent? (And I think part
> of the reason Torrent keeps coming up is that's what was in Dzonatas'
> original post.)
>
> Since DSL and other internet services are asymmetric, if I go to a
> busy, complex sim, what's to prevent my uplink from becoming
> saturated with texture sharing, lagging my client because it can't
> squeeze in a message to the sim?
>
> Why do we need lossless textures for anything except sculpties? Not
> to sound like I'm saying, "Textures are good enough for me, and dog-
> gone it, they're good enough for you," but really, the textures I
> upload look fine. The only problem I have is not that they're lossy-
> compressed, but that they're not big enough. I can easily see pixels
> on low-resolution textures on large objects, and even some not-so-
> large objects. It has nothing at all to do with compression.
> Honestly, though, I'm not crazy about lagging my viewer client-side
> with huge textures, regardless of where they're downloaded from.
>
> Dzonatas said that artists want lossless compression for their work,
> but I wonder if they're double-compressing their images on upload. Do
> they save them as JPEG from their image editor, then upload them
> which compresses them again? I only ever upload Targas so that
> there's only one round of lossy compression, and I get excellent
> results.
>
> In all honesty, I'm not sure that P2P and larger/lossless images need
> be tied together. In fact, if we could just get web textures, then
> people would be free to host their own Squid caches with Dijjer
> enabled and off we go. Of course, if someone stops paying their
> hosting fees, their stuff goes buh-bye. LL-hosted textures would then
> benefit from a custom client-side P2P that shares textures between
> residents visiting the same sim.

On 7/7/07, Dzonatas <dzonatas at dzonux.net> wrote:
> I originally jotted down the ideas based on a collection full of talk
> and data. I thought I was pretty darn clear about "subject to change."
>
> Yes, torrents are mentioned but it does not have to be BitTorrent;
> however, it would be easier to use something that already exists just to
> get the proof-of-concept done before we roll our own.
>
> BitTorrent uses connectionless transfers and peer discovery. That by
> itself is the main reason why it is slow to initiate transfers. If it
> kept active connections to peers, it would be much quicker, without
> the initial transfer-time slowdown. There should be an option for who
> stays active; whoever doesn't gets treated like a normal BitTorrent
> client. Everybody else without a client uses the HTTP method to get
> files. This can all be combined into a Squid cache.
>
> It would be nice to have one viewer connect directly to another. It
> would be easier to allow these viewers to talk to a Squid to find out
> who has seen certain textures recently, in order to get the textures
> from them, if it is enabled on those viewers. That way, even people
> who aren't in the same sim can still access the tracker to find the
> texture.
>
> Remember, we can make these things optional.
>
> Why don't we just put an apache server into the viewer, then we can just
> simply http redirect viewers that don't have a texture to viewers that
> do have a texture. =p
>
> I can think of a few websites that would love to host these textures.
>
> Again, if all external hosts go down for some odd reason, I believe I
> did mention a "fallback" option for the worst case.
>
> Of course, the sim needs a copy of the texture, especially for
> sculpties and their physics, even if it never sends it to a viewer.
>
> --
> Power to Change the Void

