[ art / civ / cult / cyb / diy / drg / feels / layer / lit / λ / q / r / sci / sec / tech / w / zzz ] archive provided by lainchan.jp

lainchan archive - /λ/ - 13109

File: 1451510537456.png (24.28 KB, 300x120, ipfs-logo-white.png)


Anyone familiar with the InterPlanetary File System (IPFS)?

It's a neat protocol, borrowing ideas from torrents, blockchains and git, that aims to replace the web as we know it.

In practice it's a nice tool for distributing all kinds of files, websites included.
The difference is that there is no single point of failure. And it is protected against Sybil attacks (for those who care).

It is based not on resource location but on resource content. I.e. if I go to www.google.de/InterestingImage.png my browser will always give me whatever is at that location, be it the Google logo or necro horseporn.
IPFS uses SHA-256 hashes instead. A resource link takes the form of a string like QmWsmdwbd2NLgyyB7FHLCCd3Qu8F7kXk19eZAdKGyFTdqv (not exactly easy to remember) and will always point to the file whose hash corresponds to that value. Changing even one bit of the file yields a completely different hash.
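You can see that property with plain coreutils (this is just sha256sum, not IPFS itself; IPFS additionally wraps the digest in a multihash and base58-encodes it to get those Qm... strings):

```shell
# Hash two inputs that differ by a single character; the digests come out
# completely unrelated. IPFS addresses content by exactly this property.
printf 'hello lain'  | sha256sum
printf 'hello lain!' | sha256sum
```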

The closest equivalents would be I2P or GNUnet, both extremely slow (as in sub-5.6 kB/s slow).

Now the interesting part, how to get started.

First go to:


download the binary for your platform, unpack it, and move into that directory. (I will assume you are using Linux or a Linux-like OS for the following steps)

Inside that directory, make sure the .sh file has execute permission set, then run it with sudo or as root to install.

That's it, you have now installed IPFS on your computer.
Except it's pretty useless as it is right now.
We need a few more steps.
First: tell IPFS where its repository should live; that is where all the files and chunks of files will be saved.

You can do this with the following line in Bash:

 export IPFS_PATH=/home/YOURNAME/ipfsTestDir/ 

this tells IPFS to put its repository inside that directory (the directory should already exist, and adapt the username of course).

Then you might want to add a bootstrap node, a known address through which your IPFS installation can reach the rest of the IPFS swarm, i.e. everyone else running IPFS.
Like so:
 ipfs bootstrap add /ip4/ 

this is the access point of ipfspics: https://ipfs.pics/

Now initialize your IPFS repository:
 ipfs init 

(you can skip the IPFS_PATH export and the bootstrap part, in which case IPFS will use the default ~/.ipfs directory in your home as its repository)

Now, to enable others to access the files you have added to IPFS (no, we have not added anything yet, be patient pls), start the IPFS daemon in a terminal window you do not otherwise need:
 ipfs daemon 

(you can see that IPFS is legit rocket science here)

Now you can add files to your repo. For simplicity's sake we will add a whole directory instead of going file by file:

ipfs add -r /home/YOURNAME/TESTDIR/ > ipfsHashes.txt

As above, replace with your username and whatever non-empty directory you want to add to IPFS.
The ">" is of course a redirect, so that all the hashes are neatly saved to a text file, making them much easier to remember.
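A quick sketch of what you can do with that hashes file afterwards (the "added ..." line format matches what ipfs add prints, but the hashes below are made up for illustration):

```shell
# Simulated output of "ipfs add -r": one line per file, the directory last.
cat > ipfsHashes.txt <<'EOF'
added QmFakeHashOfFileOne one.txt
added QmFakeHashOfFileTwo two.txt
added QmFakeHashOfTheDir TESTDIR
EOF
# The last line is the directory itself; that is the one hash you share.
awk 'END { print $2 }' ipfsHashes.txt
```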

Congratulations, you just shared some files over IPFS.

A few neat tools to make your life easier:
The Chrome/Firefox addon "IPFS Gateway redirect", great for accessing IPFS content from your browser; it works as long as the IPFS daemon is running in the background.

The link http://localhost:5001/webui gives you a more noob-friendly user interface in your browser.
And of course the website of the project, www.ipfs.io, with tons of resources to familiarize yourself further.
(the mailing list is especially nice; everyone is friendly, and Juan Benet, the inventor of IPFS, is often active there)

Of note: IPFS is still in alpha stage, version 0.3 to be precise, so expect bugs.

See next post(s) for a few files I am sharing, mostly books.


First link:

This is the link to the directory where I put all the files I added to the repository.

I only have a 100 kB/s upstream pipe, so please be patient.
Of course many of the books can also be gotten from Lainchan's Volafile room.

I had planned to attach the ipfsHashes.txt that the hashing process produced, so you could access each file individually without needing to go through the directory, but unfortunately it seems to be impossible to upload txt files here.


ayyy the rookie
that'll make my browser try to bind to the 8080 port on my own pc m8
you have to give us your internet IP, and that ain't 192.168.1.X either, use http://www.whatsmyip.org


that's not at all how it works... you don't need someone's internet-facing IP; the server does, but it can get it itself


No, >>13135 is right.
You start the IPFS daemon on your own machine, then connect to that; that's why the address points to localhost.
The daemon takes the hash supplied and looks for the file in the IPFS swarm.


If anyone needs an HTTP gateway that works with IPv6:



File: 1451586549583.png (97.08 KB, 158x200, 1450779281444.jpg)

Thanks for the great post! IPFS is really interesting.

The authors believe it to be a replacement for HTTP, but there doesn't seem to be any way to do something similar to a POST in IPFS. Imagine a purely hypothetical image board hosted on IPFS. A read-only board would be trivial, but how could you allow people to submit posts without resorting to a different protocol?



thanks, will add that to the firefox addon.

I think in time this too will be implemented.
You have to remember that IPFS is less than one year old.
Imagine someone trying to conceive of an image board in 1993.
And yet eventually it happened.



Good point. Do you know if that's on the roadmap for the IPFS developers?


share the firefox addon link


I don't know; your best bet is the (really good) mailing list.

the name is in the OP post; is it really too much to ask to copy and paste it into the "search for addon" field of Firefox?

But since you whin- I mean, asked:


idk, but if you google "4chan ipfs" there was a github project about a completely distributed chan board. Supposedly IPFS is also going to get a JS port, making it run completely in the browser.


you mean this? >>13143


didn't really look into it, but.....
not a true chan, but nothing a JS and CSS facelift can't fix, it looks like



Gentlemen, a thought is forming in my head. Not yet complete, but a thought nonetheless:

Using the public/private keys that IPFS generates naturally, it might be possible not only to implement strong cryptography on a per-post, per-thread and per-board basis (including invite-only threads), but also to implement a working message board where the client (be it web or whatever) automatically fetches only the latest version of the posts.

Let's say the OP starts a thread, which for simplicity's sake would be a simple .md file that he pins to IPFS.
Now he would sign it with his private key, so that everyone who knows his public key can verify that it is actually genuine.
Now of course one could create a new object called "thread" or "board" with its own pub/priv keypair.
Using some more advanced cryptography, each key could be expanded, that is, multiplied over a finite field with the "master" key, be it from the OP or from the thread or board object, making sure that all subobjects are readable (that is, decryptable) by the superobject (we assume a tree-like structure).

Now using git we could implement a client that follows the "trail" of digital signatures of a certain object or person (same thing as far as the computer is concerned).
The client would only get the latest object.
Once the client has it, he can edit it: that can mean actually editing a post (given sufficient rights) or simply appending his own post.
Since afterwards the signing is done with the key of the superobject, everyone else that "subscribes" to the thread or board can also get the latest version, after that edit.

Basically it would be a permanent git branch and git merge every time someone posts or edits anything, with the client only fetching the latest master.
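The sign-and-verify half of the idea can be sketched with plain openssl (a toy stand-in only: IPFS actually uses its own node keypairs, and every filename here is invented):

```shell
# OP generates a keypair, signs the thread's .md file, and anyone who
# holds the public key can verify the post is genuine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out op.key 2>/dev/null
openssl rsa -in op.key -pubout -out op.pub 2>/dev/null
printf '# thread\nfirst post\n' > post.md
openssl dgst -sha256 -sign op.key -out post.sig post.md
openssl dgst -sha256 -verify op.pub -signature post.sig post.md
```

The verify step succeeds only for the exact bytes that were signed, which is what would let subscribers reject tampered thread objects.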

It's probably extremely incoherent and not completely thought out, but on the other hand it's already 1:30 am here in Switzerland and I just wrote that by the seat of my pants.

I will definitely get back to that tomorrow and see if I can get some sort of prototype done.


godspeed lainon. i have barely an idea of what you said but it sounds awesome


That sounds like it would be great if someone could get that off the ground!

As asked in >>13139, do you think this could be done purely using IPFS? It looks like you'd still need something central to manage incoming posts.


Damn, this sounds very similar to a concept I had for a distributed irc/chan protocol. The idea would be to have a nested hierarchy of "channels" created by a process of dynamic grouping. On the highest level of the hierarchy, there are no limitations on what can be posted, and so people might naturally gravitate toward creating channels which help to curate or moderate discussion. A channel declaration would specify which posts and channels the client program will include or exclude while browsing.

With this system it would be possible to build up different organizational structures like an image board. To make a board, a channel could be declared that only accepts posts declaring other sub-channels. These "thread channels" could also be required to disallow posts which declare any further channels. At the lowest end, posts in threads would be required to ignore further sub-channels and sub-posts.

The possibilities aren't limited to this however, because we could allow for additive combinations of other boards. The posts and threads of /cyb/, /lam/ and /tech/ could be grouped together forming a traditional image board. However, someone else could curate /cyb/, /tech/ and /drg/. The individual components are separate, but they are grouped together dynamically. This would allow for some experimentation with structure as well, and things completely different from image-boards might arise.

Building on this idea, channels could also subtractively filter out content from certain boards, sort of like having /all/ minus /r/, or something along those lines. This could also apply to public keys, which would allow for a sort of "soft-moderation," where every post exists, but it's up to users to organize what posts, threads, boards, or channels they read. Depending on the channel however, this doesn't need to be implemented, and is only a possibility.

I'm not sure that's exactly what you had in mind now, though. Regardless, I'm very curious to hear more. I would work on something myself, but I'm afraid I'm fairly inexperienced with this sort of programming. I think I'll look into IPFS some more.


thank you very much

the nodes would be the central part.
The PKI would ensure that only authorized nodes (which might be everyone) can modify the thread or post object.

These set operations on subchannels sound really interesting.

Alright, bad news.
After a night of barely sleeping, because I could neither get the idea out of my head nor find a way to implement it, I read this article:

Turns out (as of now) it is fundamentally impossible.
IPFS can only deal with immutable objects. It is impossible to indicate that an object can or should change in the future, only that it has changed in the past.

Fear not however as hope is not totally lost. At the end of this article it reads:
"In the future, we may have ipns entries work as a git commit chain, with each successive entry pointing back in time to other values."

Turns out I was onto something by thinking about using a git commit chain to model past and present state.

I should probably buff up my idea a bit and then mention it in the official mailing list.

I will see...


In the meantime, have an article that explains what IPFS does behind the scenes using "normal" language:



>"In the future, we may have ipns entries work as a git commit chain, with each successive entry pointing back in time to other values."
That's an awesome idea with weird side effects, like how you could get the whole history of a website as metadata.
It doesn't solve the main problem though: you still have a single resource to update.

I wonder if you couldn't get around this by having an ipns address for every mutable page you add; it would get us back to an http-style addressing of mutable content while still allowing immutable content to be served under ipfs.


>I wonder if you couldn't get around this by having an ipns address for every mutable page you add

I was thinking along the lines of creating a nodeId for each object of a message board, and thus automatically having an IPNS object too.

I'm still stuck on figuring out some parts, however.
For example, let's say I fetch the IPNS object of a board (as in: the whole board) which corresponds to www.lainchan.org today.
I mutate it by appending my post to one board.
Now, however, the node from which I got that object in the first place needs to know about this change. It can still reject it, of course, but it's the knowing part that has me blocked right now.

One possibility would be to use the IPFS daemon to scan the swarm for all objects that have the same git object in their past history and automatically fetch the commit chain up to the current state.

Another, probably less workable way would be to use the Merkle DAG. Since this chunks big objects into a tree or graph of smaller ones based on their hashes, a big enough object that has been modified by appending data would still share some parts of its original mDAG.
One could track these and fetch the new parts, updating the local object.

This opens problems, however: it only works for appending changes, and only for objects big enough to be cut into more than one chunk.

Imagine one object being the post "Lol so tru" and a new post "XD so funneh".
The chunking algorithm working behind the scenes would probably create only one chunk, and thus only one hash, for both posts. The original hash would be lost.

Not to mention what would happen if an object is modified by deleting parts of it, or by simply changing a few bits, as would be the case in a collaborative picture editor.

Problems, problems...


I just put a few videos of talks explaining IPFS on IPFS.
Sure they are available on youtube right now, but watching them there is not challenging enough ;) :



This path seems to be broken for me, and I've confirmed that IPFS is working on my local and remote machines. Also, the gateway successfully lists the contents of the path, but the links within the listing are broken.


it's probably because I am the only one who has the files at the moment, meaning no one else has downloaded them yet, and right now I am offline.
Well, my laptop is, that is. Posting this from my tablet.

Will probably resume around 9am this morning (around 7 hours from now)


I'm working on a thing in Common Lisp which pins IPFS paths, hashes them with SHA-256, and associates the resulting hash with its IPFS path along with a set of tags in a database. You can also search the database with a list of tags and get back a set of hashes, which correspond to IPFS paths.

My end goal is to have a backend with a json interface and a web-based frontend which looks and functions similar to a booru. I also want to archive mirrors to a given file from other protocols (http, ftp, gopher, etc), and be able to archive directories (you can't directly sha256-sum a directory, so the archive-ipfs-url function I wrote returns failure if you try to add one).

The only implementation it runs on is SBCL because of certain impl-specific functions. The other dependencies are listed in the package file. I don't think the system will load correctly yet, so you'll have to eval each file to test the functions. My next step is to make the existing functions accessible with a TCP socket and a JSON interface. I also need to write a spec of some sort...

I added a snapshot .tgz of the repo, here's the mdag:


Some of my retrowave collection




Something I implemented privately (for someone else, as proprietary software).

Just bind ipns to a manifest file with a doubly linked list. Each link can contain the diff of the object that it references.
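A minimal sketch of what such a manifest might look like (pure assumption on my part: the field names and hashes are invented, the doubly linked structure is the point):

```shell
# Each entry points to its predecessor and successor and carries the diff
# of the object it references, so a client can walk the chain either way.
cat > manifest.json <<'EOF'
{
  "head": "QmFakeEntryTwo",
  "entries": [
    { "id": "QmFakeEntryOne", "prev": null,             "next": "QmFakeEntryTwo", "diff": "+ first post" },
    { "id": "QmFakeEntryTwo", "prev": "QmFakeEntryOne", "next": null,             "diff": "+ second post" }
  ]
}
EOF
grep -c '"diff"' manifest.json
```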

You still have to manage the input of new posts, but that can easily be done by providing a UI that registers other nodes as "content providers". Then they can pull modifications to the chain and accept them depending on their criteria. Similar to a federated system.

Each "suggested" addition by content nodes can be signed. The node that represents the point of coalescence doesn't have to accept the addition, but other nodes can choose to pick up on "channels" and feature it there.

Think of IPFS as TCP/IP, we just need to build a protocol on top of it.

If anyone is willing to work on this (outside of what I can't release as intellectual property), I'm willing to do it.


For some reason I could not access working hashes using my local daemon.

I have forwarded TCP port 4001 and
even checked at http://canyouseeme.org

Anyone else having this problem?


Cool stuff!


try using a public gateway to see if you can access them over it.
Otherwise, try
 ipfs daemon --mount
And make sure you have set the correct access rights



Thanks for the suggestion,

I have fixed it.

For future reference, be sure to check whether these ports have direct access to the internet:

netstat -nlp | grep ipfs | grep -v 127.0.0.1


I get a directory listing, but can't access any of the videos. Has nobody pinned these?


I'm the one who posted them, but I'm currently on the road, so my PC is offline. Try again tomorrow morning (UTC).


I remember seeing an IPFS thread on 4chan a while ago. IPFS doesn't support in-place pinning, meaning that files must be copied into blocks in the home folder for storage, effectively duplicating them. Someone recommended a file system that wouldn't do this, with a short explanation that it somehow kept an index (journal? idk what it's called) of files in blocks, just like IPFS, allowing the original file's contents to be used as the IPFS block as well, making the system believe there were two files although both were reading off the same disk sectors. Does anyone know what file system this is?



ZFS can do it already, and Btrfs support is being worked on. Apparently it requires a lot of RAM, though.

I still think the best option would be for IPFS to support "detached" storage of files, but only on read-only file systems or ones with extended attributes, and to refuse to serve files without +i write protection. The problem is that a user may inadvertently modify the file, thus changing its hash. The server _could_ respond to some sort of inotify hook and rehash the file, but this is expensive, would break links, and modifying the file is probably not what the user wanted to do in the first place. It's better to try to prevent it to whatever extent is possible.

Another issue is that IPFS currently divides the files into blocks before putting them in storage, so the files themselves are not present on the file system. This means a simple symlink is out of the question. You could symlink into /ipfs if you have it mounted, though.


I believe it was ZFS. After further research this seems to be called de-duplication, and it looks to be very system-heavy.


I tried to look up /ipns/gindex.dynu.com via the ipfs.io gateway and it tried to serve me a Windows executable instead of the regular HTML document. I aborted the download of course, but when I later tried to resolve the IPNS to see what was up, it just timed out. Does anyone know what happened? Did the gindex dude go skiddie or what? The last copy I know of is still available at /ipfs/QmNgzvC1Y5dh5pQvPfZpZUkoHuB6Z7xwobo5Rv19nZcwA8, but I wanted to see if there were any updates.



these guys are working on using a blockchain to support 'identities' within IPFS. Effectively, a comment is tied to an image (or any other kind of file, obviously) as metadata, signed off by the user, and timestamped by the blockchain. I think you would then look up the image in the blockchain and find all corresponding comments associated with it. (and since they're decoupled, a later check on the same hash would still pull up new comments)

I haven't looked into it much, but it seems like it'd very easily transition into a chan. The main question is maintaining temporality, since obviously one of the key features of the blockchain is to maintain persistent records.


What if someone posts CP?
Can we delete the CP?
It's a legal issue.


if it's running a pub/sub model, then you can easily remove the cp and push out the new page to all subscribers.
however, it'd still ofc be there in the blockchain history,
but that's not much different from, say, hosting an archive of a chan where cp was once posted


You mean, the picture would still be there, or the picture's hash would still be there?


well, it'd be the picture in the form of a series of concatenated hashes,
and so you could just regenerate the image over the IPFS protocol, same as you would any other file.

the way I'm imagining it is that you'd take the mine-lab's use of the blockchain to keep metadata for a given file, and instead have that metadata be individual posts. The user would go to a thread's hash with IPFS, and then go to the blockchain to get all the current posts (each post simply stored as concatenated hashes, and regenerated over IPFS to fully compose the page). Each post then adds to the chain, and I suppose a thread's post limit could simply be the size of a particular block.

However, I'm not sure how you would go about replicating page drop-off, as in when a thread falls off the site.


A collection of programming/computing books (mostly). Local repo, so grab them before the internet becomes soykaf on my end. I'll be mirroring this on a vps later, but for now.


Something we're leaving out: what happens when two IPFS nodes (referenced by IPNS) point to each other?

I've tried it before, and you'll have to get used to batch message dispatching over stream/session based activity, but it's a good way to federate across servers.




Health check.
If it's a directory, I only check if the directory loads.

/ipfs/QmdDKvbEBePMuVNDfcVou6uXdp71S9KAQQDoAxySpToULF works

/ipfs/QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxdXBFca7D/example#/ipfs/QmQwAP9vFjbCtKvD8RkJdCvPHqLQjZfW7Mqbbqx18zd8j7/ipns/readme.md works

/ipfs/QmZ6zUcsYaPteKHebHSANwEhPrvYT6myrxGGBKhVALhG8J works

/ipfs/QmUMhJdunCJwvzGMg1kamyppA1S4ZPWRW4CCcuZdTGoj4W is ded
/ipfs/Qmbxc8cZUnU9hCerQEk5ZHY8mp1UBpQuPpJrSD9seZ5vF3 is ded
/ipfs/QmeDvxhkn6c58amFA6RQzVCZGJa5tQaRoUuBBZ7dcuFtXq is ded

/ipfs/QmPscEBkuAsmnxRcdtMK6LBB5BxQmjPMGsARUX2NfchVHD is ded


How can stuff on ipfs be dead? I thought making things stay forever was its only purpose?


The same way torrents die. Network storage isn't free, lainon, and IPFS isn't magic. In order for an IPFS link to continue living, one or more IPFS users must have pinned it, and one or more of them must be online.

The way they market themselves as "the permanent web" is really disingenuous imo.


>In order for an IPFS link to continue living, one or more IPFS users must have pinned it, and one or more of them must be online.
Oh? I thought ipfs wanted to provide an egalitarian file storage DHT.
Thanks for clearing that up, anon.


I'm confused now. Is that not what it is?


My idea of an egalitarian file storage DHT is one that distributes the files across the DHT without any user interaction (for example by storing files on the nodes whose IDs are closest to the files' hashes).
From >>15807's explanation it seems like it doesn't work like that: users have to actively choose which files they want to share.


Ah, okay. We could probably try using the IPFS library to create something similar to what you describe, but I suspect that, with people indexing files with caution thrown to the wind, it wouldn't work well. At the very least, it'd be quite easy to DDoS.


File: 1461203272220.png (1.13 MB, 200x103, how-cute.png)

> In order for an IPFS link to continue living, one or more IPFS users must have pinned it, and one or more of them must be online.

Rereading the BitSwap Protocol now, if I'm reading it correctly, then I'm pretty damn sure you're completely wrong.

IPFS operates on _blocks_, not files. Files are composed of blocks, and blocks are referenced by their hashes. When you request a file object, what you're actually requesting is a series of blocks (from anywhere on the network), which you'll reconstruct on your own machine. Same as BitTorrent.
That list of blocks is stored in the distributed hash table (DHT), pieces of which can be found on IPFS nodes (so when you request [the hash of] a file, you bounce around until you find someone who knows what the file object's Merkle tree actually looks like. Then you start requesting the blocks composing it.)

And of course, the point of using a DHT is that there is no SPOF (and you can store it efficiently); You will be hard pressed to find a file's structure-data that can't be found _somewhere_ on the network (and this would become exponentially more difficult as the network grows)

Importantly, unlike BitTorrent, there is no _tracker_ that segregates _seeding_ by file object. BitTorrent limits whom you can gather blocks from to the people attached to the particular tracker for that file object, as the BitTorrent DHT is limited to storing only peer metadata (which, if I understand correctly, is what you get out of your magnet links).
The reason you can't download a file with no seeders using the BitTorrent protocol is that its DHT only stores metadata for _single files_.

IPFS removes this limitation, as the IPFS DHT is global. When it requests a block from the network, _it does not ask where that block came from_. In other words, you can gather blocks from _people who never saw the file_.

When you 'publish' a file, you are simply adding the file to the distributed hash table, and getting a hash that you can hand off to people. If you 'pin' a file (the closest thing to hosting it), all you're doing is telling your machine not to delete it during routine garbage collection.
But that hash you pass around has got nothing to do with you. It doesn't matter if you still have the file or not. As long as the necessary _blocks_ exist on the network, the file can be reconstructed. So the main question for file permanence is the number of _blocks_ existing on the network, and the granularity of those blocks (if one block composed the entire file, it wouldn't be very useful; but if every block were a single ascii character, you could reconstruct any plaintext file from very few nodes with file data on the network).

If I understand it correctly, this would also imply that given a large enough network, you could 'publish' a file without ever actually putting that file on an ipfs node. If all the 'blocks' can be found, then the whole file can be constructed.
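The block-level dedup being described can be illustrated without IPFS at all (the chunk size here is arbitrary; as far as I know IPFS defaults to 256 KiB blocks):

```shell
# Two different files that share a run of identical bytes: after splitting
# into fixed-size chunks, the shared chunks hash identically, so one stored
# block could serve requests for either file.
head -c 1024 /dev/zero > a.bin
{ head -c 1024 /dev/zero; echo unique-tail; } > b.bin
split -b 512 a.bin a_chunk_
split -b 512 b.bin b_chunk_
sha256sum a_chunk_aa b_chunk_aa   # same digest: shared block
sha256sum b_chunk_ac              # only b.bin needs this unique block
```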


File: 1461203831405-0.png (288.46 KB, 200x145, IPFS-BitSwap.png)

File: 1461203831405-1.png (329.79 KB, 193x200, IPFS-File-Object_List.png)

File: 1461203831405-2.png (245.55 KB, 200x157, IPFS-File-Object_Tree.png)

Also, if I understand it correctly, what the block-based structure _should_ mean is something like this:

Let's say you request a png, which would be divided into blocks,
with some of these blocks being nothing but transparency.
Those transparent blocks would be the same for any png with enough transparency,
so for those blocks you'll end up pulling them from just about anyone with a sufficiently transparent png on their computer.
The only pieces that could actually be _dependent_ on the original png existing on the network are those wholly unique to that png.

And with more people on the network, the likelihood of such 'unique' blocks existing decreases, and thus such 'originality' dependencies decrease.

And now for some chunks from the whitepaper, because no one seems to have actually read it. It's only 11 pages though.


File: 1461203878087-0.png (105.21 KB, 176x200, IPFS-Object-Graph.png)

File: 1461203878087-1.png (306.17 KB, 200x165, IPFS-Object-Merkle-DAG.png)

File: 1461203878087-2.png (208.36 KB, 200x200, ipfs-p2p-file-system.pdf)


>If I understand it correctly, this would also imply that given a large enough network, you could 'publish' a file without ever actually putting that file on an ipfs node. If all the 'blocks' can be found, then the whole file can be constructed.
That probably wouldn't work, because you would get hash collisions before that point, meaning multiple blocks that have the same hash.


it should depend on the granularity of the blocks and the originality of the file.

i.e. a fully transparent png should be retrievable pretty quickly (in terms of network lifespan) even if that particular png never actually existed on the network,
assuming the blocks are small enough in scale to support this.

obviously at file-level granularity this would be entirely impossible. I'm not sure how big blocks are actually meant to be; the whitepaper only notes that 'large' files would be naturally broken down, and I'm not sure of the units of the "size" variable in the examples.

but yeah, it's just a theoretical implication, not one that I expect to occur in practice


The hashes have to be smaller than the actual blocks, as there would be no benefit to addressing them through hashes otherwise.
Because of the pigeonhole principle[1] (big blocks -> small hashes), there would have to be hash collisions before the point where you don't need to upload any part of a file (it won't ever happen in reality, though, because hashes are still enormous).
The example with the transparent png you mentioned probably already works right now (assuming the transparent layer is stored in a continuous section of the file).

[1] https://en.wikipedia.org/wiki/Pigeonhole_principle


>Users have to actively choose which files they want to share.
Nope, you only have to actively choose which files to share _indefinitely_. Your IPFS cache is automatically shared, so files you've visited recently are available to anyone, even if the original node that shared them goes down. Theoretically, popular files would gain enough momentum to survive indefinitely. In this last sense it's a bit like torrents, with the difference being that IPFS is content-addressed, which shifts the focus from "get the file referenced by x" to "get the file x". This makes the act of choosing to share a file indefinitely completely transparent to your peers, who will always use the same link to get it. Content-addressing also pins links to a specific version of a file, so it prevents authors from deciding to edit their articles on a whim and all that jazz. This is what they mean by "permanent".

That's at least how it would work if people actually used their own IPFS client. As it is now, most people seem to be using the ipfs.io gateway, meaning files are never pinned, and only stored in the gateway's cache (which doesn't usually last particularly long, since so many people are using it).

The go-ipfs client also has a longstanding and notorious bug that causes the daemon to hang and refuse any incoming connections when it's been running for a while, meaning that your datastore and cache become unavailable for everyone else on the network, essentially causing the links to die.

And many people use the IPNS name system, which at least used to require you to manually (or via a cronjob) republish your IPNS binding every 48 hours, or the network will forget it. A lot of early adopters (like the /g/-index guy) didn't realize this, causing old IPNS links to flat out die (despite other people pinning the IPFS resource they pointed to). If there's anything I really dislike about IPFS, it would be this. I guess it could be fixed by clients retaining a cache of previously queried IPNS resources or something, though.
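For reference, the republishing workaround people use is just a cron entry along these lines (a hypothetical example: the path, key and schedule are made up, and newer IPFS versions may change the expiry behaviour):

```shell
# crontab entry: republish the site's IPNS record daily, well inside the
# 48-hour window, so the network never forgets the binding.
# m h dom mon dow  command
0 3 * * * ipfs name publish $(cat /home/YOURNAME/site-hash.txt)
```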


My primary point is just that, if I understand IPFS correctly, this can be a possibility. Not that it's how most files would be delivered.

However, thinking about it a little more, there is also the multihash aspect of IPFS, where every hash is prepended with a label identifying the hash function used.
I believe that means we have distinct hash spaces, where a SHA-1 hash cannot collide with a Tiger hash.
And thus we theoretically have an unbounded domain: if we get so many blocks hashed with SHA-1 that a collision becomes too likely, we simply start hashing with Tiger instead. And if there are too many in Tiger, we switch to SHA-256, and so on, until of course we run out of valid hash functions.

And presumably you could do something like hash bucketing for collisions, except you iterate the hash label instead: a SHA-1 collision means you use sha1.1 (still the SHA-1 function, just labeled differently by multihash), and if that manages to collide on sha1.1 too... move to sha1.2.
Though I have a strong feeling this kind of iteration is blatantly flawed, I don't know why.
One immediate issue I can think of: I'm not sure how you would tell, on a new block upload, that it was a collision and not a duplicate. Though I suppose that issue occurs with any hash-addressed storage, so it's either a solved problem or an ignored problem (I imagine it's just: if a collision is reasonably possible, increase the hash length until it's reasonably impossible).
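The self-describing prefix is easy to sketch. The function codes below (0x11 for SHA-1, 0x12 for SHA2-256) are from the multihash table; real multihash uses varints and usually base58 encoding (a leading `0x12 0x20` is exactly what base58-encodes to the familiar `Qm` prefix). This toy version just shows how the label separates the hash spaces:

```python
import hashlib

# Multihash prepends <fn-code><digest-length> to the raw digest, so identifiers
# made with different hash functions live in disjoint namespaces.
CODES = {"sha1": 0x11, "sha2-256": 0x12}  # codes from the multihash table

def multihash(name: str, data: bytes) -> bytes:
    algo = "sha256" if name == "sha2-256" else name
    digest = hashlib.new(algo, data).digest()
    return bytes([CODES[name], len(digest)]) + digest

m1 = multihash("sha1", b"block")
m2 = multihash("sha2-256", b"block")
assert m1[0] == 0x11 and m2[0] == 0x12  # the label alone keeps the spaces apart
assert m1 != m2                          # so identifiers can never collide across functions
```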


The number of blocks you'd have to store before an accidental collision becomes likely with a properly made hash function is astronomically large, far more than any network will ever hold.

SHA1, however, is broken and should be abandoned as soon as possible. There is a 2^63 attack on it (i.e. right now the NSA could maybe generate a collision, but the cost would be quite high).

SHA2-256, SHA3 (pick your favourite finalist), or some other non-broken hash should be used for security reasons, but as far as accidental collisions go, there is essentially no possibility of "filling up" a DHT.
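The "no accidental collisions" claim can be put into numbers with the birthday bound; a quick stdlib check:

```python
import math

# Expected number of blocks for a ~50% chance of a single accidental collision
# in an n-bit hash is about sqrt(2 * ln 2) * 2**(n/2)  (the birthday bound).
def birthday_bound(bits: int) -> float:
    return math.sqrt(2 * math.log(2)) * 2 ** (bits / 2)

print("%.2e" % birthday_bound(256))  # on the order of 10**38 blocks for SHA-256
```

Even at roughly 4×10^38 blocks, a single collision would only be about 50% likely, so the DHT is never going to "fill up" by accident.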


IPFS listens on a bunch of open ports, but they get blocked because I have default incoming connections blocked. Is it a requirement to allow all incoming ports on ipv6?


blocked in my firewall*


30s of googling later, and the answer seems to be to forward port 4001 TCP. You can also configure for a different port in your .ipfs/config file.


I have my swarm port forwarded, and unblocked in my firewall, but people still have trouble downloading my files. I am not talking about forwarding ports, because the IPv6 ports probably do not need to be forwarded. I am talking about them being blocked in my firewall, since I can't predict what ports will be used.


Here's the file in question: QmVNFUBaNVKn2EaGMhnhtUcP54aJH8pTjUdvLu5ofQuzzL

When I try to download it to my VPS, I get an error that it was cut off unexpectedly, and then the daemon crashes.


File: 1462386208174.png (79.17 KB, 200x127, ClipboardImage.png)

Be hyped guys, js-ipfs is almost ready!

Don't know what to tell ya. I think the ipfs IRC would be a better place for your issues. Or their issue tracker, if that fails.


Does the download fail for you? I asked once on #ipfs on freenode about my problems, but didn't get a response.




Btw, with glop.me you can upload files <10MiB to the IPFS network. Remember though that if you wanna benefit from IPFS (namely, not relying on a single server to keep your shit up), you've got to get other people to pin it.


Here's my problem with IPFS. I have a feeling I've just missed the already implemented solution but it comes from a UX perspective.

The current method for serving resources over the net is URI-based. The URI points to the resource without saying anything about its content. This all goes without saying. The upside is that it spares the user from dealing with any kind of versioning: google.com is always going to be google.com, no matter what version the file is.

What I feel needs to be implemented is file clustering. A pub/priv key pair is generated for a group of files; the private key signs each of the original hashes, and they are distributed in a file with the public key attached. This would act as an alias. The issue being, all aliases would need to be updatable without changing the address they are listed under. At the end of the day, we still need an IPFS DNS of some sort.
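That is roughly the shape of what IPNS provides: a stable name derived from a key, resolving to a signed record that can be re-pointed at new content. A toy sketch of that record structure follows; since the standard library has no public-key signing, an HMAC with a secret stands in for the real signature, so this illustrates only the structure, not the security:

```python
import hashlib, hmac, json

# Toy mutable pointer: the stable name is derived from a key, and the record
# it resolves to can be re-signed to point at new content. HMAC is a stand-in
# for a real public-key signature here.
SECRET = b"stand-in for the private key"

def name_for_key(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()  # stable alias, never changes

def sign_record(target_hash: str, seq: int) -> dict:
    record = {"value": target_hash, "seq": seq}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

def verify(record: dict) -> bool:
    payload = json.dumps({"value": record["value"], "seq": record["seq"]},
                         sort_keys=True).encode()
    return hmac.compare_digest(
        record["sig"], hmac.new(SECRET, payload, hashlib.sha256).hexdigest())

rec = sign_record("QmOldVersionHash", seq=1)
assert verify(rec)
rec2 = sign_record("QmNewVersionHash", seq=2)  # same stable name, new target
assert verify(rec2) and rec2["value"] != rec["value"]
```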


I prefer object storage.

I run a small CEPH cluster at home. Best part is how I can just add random drives and it just works.


It's funny how that tiny bit of mutability allows for a lot to change under the hood.

I'll put something together here soon to do ipns indexing a la directory listings.


Hey kiddos, want some old news
>HT leaks from two years ago



Thanks m8


QmYLUfjCAESp38rYozixrDCuLCPSKwR21S9kCzPHYAzZ55 lainzine01.pdf
QmeTfzJ9zfrTGcLxy54LRW6zkYfXXTZDiix727aYyvY5zL lainzine02.pdf
QmWH3NYaJHuMD2Az9ZMPGVc39W1FRLG8fCGTewBZTDQGaN lainzine/lainzine03.pdf
QmRm2vUiJjR6jPj1FmJsF13hPZcEhmbftB3GcFHLw8xyq8 lainzine


I found this old thread in /tech/.
It contains some hashes, and maybe some anons who can get involved in this thread.


I'm pinning some files just across my local network, and often it works ok but then with larger files (this time it was 200MB) I'll get this error where the daemon exits in a hugely verbose manner, talking about its various goroutines but never actually saying "error". In the screen from which I made the pin command, it's way less verbose and just says EOF, presumably because the daemon crashed.

Anyone have anything similar or any advice? Given that it's not even going through the internet, but just my router, it always goes really strong at the start and then just falls off abruptly.


y'all know about https://ipfs.niles.xyz/ right


is that yours, the scraper? if you could point it at the #ipfs logs you'd probably get a treasure trove (maybe too much, actually)


Isn't IPFS development pretty dead?


File: 1472746024532.png (41.95 KB, 200x86, Screenshot_20160901_110423.png)

It had a spurt around the new year but otherwise it's been pretty consistent. I think most of the work is on peripherals now.

Certainly more active than zeronet, ipfs' only competitor AFAIK.


QmaZVSEjpRrC4V88kEkYffVTd7Co6KY27LeQnQ1rCh3oFJ Gentoomen Library

Testing stability of massive amounts of blocks here.


I was told the /g/entoomen library was actually filled to the brim with viruses. Do you know of any alternatives to that immense collection?


Downloaded it a few years ago, had exactly zero viruses or warnings.


Last time I had 4 or 5.
Nothing serious, and Windows-only. I have been told there is some CP in it, but haven't found it.


I think one folder was given a virus, but most of the stuff in it is very dated in the first place, anyway. It's easier to get books these days, too, considering where we are.


there's a full working mirror of xkcd available at /ipfs/QmXzDGjRT7McpuLHfRP42ST6bZbjX2KvDGxAZ68gXFdbBz/xkcd.com/index.html

it's current as of Monday and weighs in at 166MB


So, I have read the IPFS draft paper, and I wonder: if I upload a dick pic to IPFS, will I ever be able to remove it from there?

Also, does it all mean that we will be able to have chans without old threads dying from being pushed out of the catalog? (I know we could have that today, but it would kill the server's storage capacity)


You're missing the point, it seems. Your dick pic can be pinned by anyone, in which case it'll be available as long as any one client retains that file, and that hash will remain valid.

And for old threads: as long as one consumer of that data has access to the old threads and pins them, then no, those threads will not die.


Well I got it, but I was more asking out of concern.

And the threads question was actually more about thinking how nice IPFS would be as an upgrade to the web.


How likely is it that data gets "lost" from ipfs? E.g. nobody clicks a link for a long time and eventually nobody has that page anymore, so it's gone.


Depends on the file and its popularity.
I assume that a file like meganFoxNudesLeaked.zip will be available much longer than MyBoringHomepage.html


I think IPFS is more than just programming and it should be moved to /sci/.

If only to populate /sci/ more.


Disagree completely.
It's software in development with no direct connection to science in itself.
It's exactly on the right board.


so, is this in actual use anywhere of note, or planned for such use, now?


It's way too early.


It's like asking whether HTML was used a lot in 1993.


you are describing usenet


openbazaar runs an IPFS node internally


Does 'everything is a file' apply more to IPFS than it does to the current internet?


I want to try this P2P idea with my website because Neocities support it



No need to answer anymore, I already asked on #ipfs IRC and the creator of it said 'yes'.

Furthermore he added 'Everything is a hash chain.'


This is a really interesting initiative. But I have a couple of perplexities.

This is a bit worrying. To my understanding, each node locally caches part of the net's contents, possibly in redundancy with many other nodes. The cache has to be managed with some policy, maybe some variation of LRU or LFU, but in all there is little control over how this is done. The only way to be 100% sure some content won't disappear from public availability is to pin it locally... which basically means transforming your node into a server.

Also, I tried it out today and it's really, really bloody slow. Seconds to resolve an address, whew. I wonder if this sort of protocol could ever achieve acceptable performance without major infrastructural updates, e.g. replacing ADSL with SDSL as consumer standard. And I wonder if such a transformation would ever be realistically possible, politically speaking.


You have to think on a larger scale, I think.
Right now you are thinking of IPFS as an application like Napster or Kazaa.
In fact it is a protocol. Imagine if (we idealize here) all applications were built on top of IPFS:
Your WoW updater, Steam, Music Rips, html files, software releases through Git... they all would operate in one huge swarm. No data duplication at all. No risk of malicious file changes at all. Right now most people use it for filesharing and filesharing only, so of course performance is not that great (and again: the protocol is very, very young. Give it some time).

File availability today is also a problem, and a much bigger one than with IPFS. If the server does not have your file anymore, good luck finding it at all. If in IPFS the server (i.e. a node with above-average upstream bandwidth) does not have your file anymore, any of the thousands of other nodes may still have it, and your computer won't even realize that the server is gone.

Furthermore, many peers can distribute a file faster than a few servers, even with limited bandwidth. There is a paper out there somewhere that proves that; no idea what it was called.


Of course, data permanence is better with a p2p swarm than with a centralized server. It's just that the IPFS people really stress this idea of data never ever disappearing, while that is not really the case. To make a concrete parallel, I think we've all gone through the pain of looking up a torrent file, only to find it is very old and nobody really seeds it anymore. Smart automated cache management (possibly aware of a file's availability on the swarm at large) could alleviate this, but not solve it completely. I'd be quite curious to hear their ideas in this regard; the cache management policy sounds like a fundamental pivot of the architecture.
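The cache-versus-pin split being discussed can be sketched in a few lines. This is a toy model of the policy (LRU cache with pins exempt from eviction), not how go-ipfs actually implements its datastore:

```python
from collections import OrderedDict

# Sketch of the cache/pin split: fetched blocks land in an LRU cache and can
# be garbage-collected; pinned blocks are exempt until unpinned.
class BlockStore:
    def __init__(self, cache_limit: int):
        self.cache = OrderedDict()  # hash -> block, in LRU order
        self.pinned = {}            # hash -> block, never evicted
        self.limit = cache_limit

    def fetch(self, h: str, block: bytes):
        self.cache[h] = block
        self.cache.move_to_end(h)
        while len(self.cache) > self.limit:
            self.cache.popitem(last=False)  # evict least recently used

    def pin(self, h: str):
        self.pinned[h] = self.cache.pop(h)

    def has(self, h: str) -> bool:
        return h in self.cache or h in self.pinned

store = BlockStore(cache_limit=2)
store.fetch("QmA", b"a"); store.pin("QmA")
store.fetch("QmB", b"b"); store.fetch("QmC", b"c"); store.fetch("QmD", b"d")
assert store.has("QmA")      # pinned: survives any amount of traffic
assert not store.has("QmB")  # merely cached: evicted when the cache filled up
```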

At any rate, IPFS is an application-level protocol geared towards content delivery. In fact, that's another observation I wanted to make: you can't really revolutionize the Internet's infrastructure (as is required, IMHO, of course) just by switching from HTTP to IPFS without also replacing all other services with p2p equivalents. The current state of widespread client-server architecture is not just based on the needs of content delivery; there are also computational requirements as well as needs for privacy (e.g. I don't want my bank's database to be distributed around the planet, but I'd still like a secure frontend to interface with it). It can't be enough to replace HTTP to create a distributed Internet.


You're completely right. If /sci/ can't keep itself together without artificial threads it should be deleted anyway.


Imagine the following scenario:
2 torrents with almost the same files which are both partially dead, i.e. some leechers with a certain % of file completion, but no true seeder left.

With "normal" bittorrent, even if both torrents would complement each other, they stay incomplete forever because they are simply not aware of each other. With IPFS the missing pieces can be pulled from somewhere else, as long as they are indeed the same files (determined by their content hashes).
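The merging effect can be demonstrated with naive fixed-size chunking (real IPFS chunkers are smarter, but the principle is the same): identical chunks get identical addresses, so peers holding either release can serve the shared part.

```python
import hashlib

CHUNK = 4  # toy chunk size; real chunkers use much larger, smarter splits

def chunk_hashes(data: bytes) -> set:
    """Content addresses of every fixed-size chunk of the data."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

# Two releases that happen to contain the same payload plus different extras:
torrent_a = chunk_hashes(b"SHAREDPAYLOADreadme-a")
torrent_b = chunk_hashes(b"SHAREDPAYLOADreadme-b")

shared = torrent_a & torrent_b
assert shared  # the common chunks have identical addresses in both swarms
```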

For the second point:
To have access to your bank's database, it would have to be fed into IPFS, and unencrypted too.
It could actually be a great idea to feed it to IPFS in encrypted form; you would have the availability of P2P with the security of C2S.
Yes, that is a stupid example.
In reality the DB would never enter IPFS; the bank's HTML frontend with access to the DB would, though. That means you could access your online bank account even if your bank's (web)server is down, as long as their database server is still responding.


Just put ipfs on my backup server. Once I get Go running I'll put it on my raspi, maybe. Throw me something useful to "seed".


Here's something:
> IPFS archiving and downloading is now supported by all web sites on Neocities.



ipfs add -wr --pin Video >> hashes.ipfs
860.15 MB / 677.36 GB [>-------------------------------------------------------------------] 0.12% 1d22h36m41s


I just want to make sure you're aware of this
and the issue it solves
It's past the due date
But I expect them to finish it relatively soon.


I want 875 bad. Is there some tool that can help me auto rename media files according to specific patterns?


>help me auto rename media files according to specific patterns?
I'm not sure how you mean, like renaming files with regex or do you mean more like detecting mime/file type based on the file content?


Both, because the metadata is not guaranteed to be present. I'm mostly looking for the name of this type of program so I can google it.

Basically I don't want to look at every file-name pattern I have and write a regex for each to convert it to my convention.

Also are there any standard naming conventions?


have everyone hold a piece of the board and have a gpg-encrypted command file that only admins/mods can upload into the cloud?


or even better, a signature-verified git-like revision history, that gpg-encrypted stuff could work for other things, maybe.



This is what I do.

Push archives of posts (this is for another application) as a static site. All static posts are included in an IPNS hash as a JSON data structure.

New posts use a CORS enabled api. The api includes 3 functions.

/api/get/<thread>, or /api/list/
and /api/post/. Posting is reflected after an ipns push. Can be spaced apart by adding time to the crontab for each ipns publish.

To get around the single point of failure, I created a simple API pass-through that acts as a proxy to the origin point. I was thinking of adding openpgp.js to ensure that posts are encrypted with the originating IPNS node's public key.

The other method of doing this is by doing batch messaging and federation. Each federated node messages every other node in batches. sjcl's crypto library would be better for this. Either way, batch processing would allow for a large portion of data to be passed around to every other federated node based on disposition and directionality.

So it's basically instituting a "wire", like a newsfeed, that all interested nodes can pick up and publish themselves. All they have to do is pick up the hashes, pin them, and update their internal model of their 'boards'.


Forgot to link:


It's the demo of the SJCL (Stanford Javascript Crypto Library). The wonderful thing about IPFS is that it's content-addressed, so a website "distribution" of core JS libs, CSS, images, and HTML will be verifiable rather easily. And as I understand it, IPFS gateway links (/ipfs/<hash>) are all CORS-enabled, allowing you to know resources won't be switched out from under you.


File: 1481556550390.png (16.1 KB, 200x92, Untitled.png)

For filetype detection use the Unix `file` command, it tries its best to determine what kind of file you're pointing it to. Read the manual too `man file`, you can feed it a list of files and have it output a list of files with different formatting options which you can then use yourself in a script or something.

If your `rename` tool supports regex you could use that but not all of them do, in which case you could use `perl`, `sed`, or maybe even `find` to form a list of `mv` commands and execute it when you're done.

Your best bet is to write a script that utilizes these tools. Scan a directory for files, for each file, figure out its type, rename it accordingly. Perl may be best for this but if you already know a scripting or shell language you should use that.
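A sketch of such a script, assuming the Unix `file` command is on the PATH. The KIND_TO_EXT table and the naming convention here are just illustrative choices, not a complete mapping:

```python
import re
import subprocess
from pathlib import Path

# Ask `file` what something is, then rename it to one chosen convention.
KIND_TO_EXT = {"PNG image": "png", "JPEG image": "jpg", "PDF document": "pdf"}

def detect_ext(path: Path):
    """Return an extension based on `file -b` output, or None if unknown."""
    kind = subprocess.run(["file", "-b", str(path)],
                          capture_output=True, text=True).stdout
    for prefix, ext in KIND_TO_EXT.items():
        if kind.startswith(prefix):
            return ext
    return None

def normalize_name(name: str) -> str:
    """One possible convention: lowercase, whitespace runs become underscores."""
    return re.sub(r"\s+", "_", name.strip()).lower()

def rename_to_convention(path: Path) -> Path:
    ext = detect_ext(path) or path.suffix.lstrip(".")
    target = path.with_name(normalize_name(path.stem) + "." + ext)
    path.rename(target)
    return target
```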

There may be a better suggestion but I'm not sure what your specific scenario is.

>Also are there any standard naming conventions?

For arbitrary files? I'm not sure, in the case of IPFS though, it doesn't matter, if I download something with a name from someone I can just rename it, inside or outside of ipfs.

See pic related, I add a file with the -w flag which preserves its name, make a directory structure in the IPFS mfs, "copy" the file directly into it with my own name, then get the hash for that new directory which people can access if they want the same file with a different name, everyone is still hosting the original file though because the hash hasn't changed, only the parent hash which only contained metadata. Notice that /tmp/example doesn't actually exist on my filesystem either, it's inside of IPFS, I could mount it though if I wanted.

For the filesystem example I move outside of ipfs into my host filesystem, I get the file raw so it has no name (ignoring the parent hash, which would give me a named file if I did `ipfs get Qmf4w9do7ywaWAHEJifcy7YW9p4LgLH2AJnRJtExfoPnTJ/1391422772749.png`), run `file` against it to see what kind of file it is, then just rename it.

Sorry if that's not what you're asking about, I'm having some trouble understanding what you're after.


File: 1481769267401.png (61.01 KB, 171x200, mami with interrogation arrows over her head.png)

Newbie here, how can I make this permanent, if at all?


File: 1481780505703.png (31.06 KB, 200x200, 1460362293660.jpg)

You could try uploading them to http://ipfs.pics which will host them for you, or you can ask other people to mirror them on their ipfs node(s). If I download those files from you while your daemon is online, I'll also be hosting them until the next time my daemon does a garbage collection; if I pin them I'll have them forever, which means they'll be available anytime my daemon is online too, so only one of us has to be online at any one time.

There's some public pin services but I don't remember any of them. In the future there will be Filecoin which will let you donate space and bandwidth to earn a currency that you can exchange for space and bandwidth from other nodes. I also saw someone working on a for-pay pinning service on github that hosts things via IPFS on amazon.


Heh, typical rookie mistake. 127.0.0.1 is the loopback address in your system; it refers to your own computer. If I click the link, my browser will be querying my own computer at port 8080.
192.168.1.x is another common mistake, if you look up your address with ifconfig:
192.168.1.x is the subnet address space in your LAN, which is translated by the router to your actual global IP. If you gave me a 192.168.1.x address, my browser would be trying to find a device in my own LAN.
Run
curl ifconfig.co
to see your global IP address.


That's the way it works though, see >>13133 made the same mistake as you.


I set this all the way up, but my soykafbag ISP won't let me forward 4001. Can I change it somehow and still be visible to the swarm?


Change the port in the Swarm sub-section of the Addresses section of ~/.ipfs/config.
For example, change
"/ip4/0.0.0.0/tcp/4001", to "/ip4/0.0.0.0/tcp/*whatever port you can forward*",

I think you can use `ipfs config` to do that too but I just edit the config file directly.
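For reference, the relevant fragment of ~/.ipfs/config looks something like the following; 4002 here is just an example substitute port, and the exact default entries may differ by version:

```json
{
  "Addresses": {
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4002",
      "/ip6/::/tcp/4002"
    ]
  }
}
```

Restart the daemon after editing so the new swarm address takes effect.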


oh soykaf I think I made that reply as well
I should learn how IPFS works.


OK one more stupid question.
In `ipfs swarm peers` I see people, but when I go into the webui through a tunnel to my server (i.e. ssh xxxx -L xxxx:localhost:5001) it won't show anything. I think it's because my IPFS_PATH is not default and the variable is not being set at login. Is there a config setting to point the webui at the right place permanently?


You're not supposed to connect directly to the other person's daemon. You should run your own daemon and connect to it, preferably over the loopback interface. This way you'll benefit other users.


So can I continue to see my Neocities website even when the service is down?


Accessing it through IPFS, sure, though I wouldn't know how it would be indexed.

What's actually happening is that Mr. Neocities just put up a heck of a lot of data redundancy for his website for free.


File: 1483393832511.png (67.89 KB, 200x80, 2017-01-02-224129_3200x1080_scrot.png)


Blacklists are a reasonable feature. The client just needs to be open about them, plainly stating there is one enforced by default and letting users have complete control over them.

As far as I can tell it's a feature still in its infancy, but it looks like the devs are more or less moving in this direction. I don't really find much to complain about.


File: 1483431971082.png (45.83 KB, 200x150, 1367956475300.gif)

gnunet doesn't have this problem
it also happens to have anonymity built in from the get-go
no idea why any of you think ipfs is a good idea
what am i missing or not understanding?


GNUNet just doesn't have a fancy marketing department like ipfs. If people actually knew what it's capable of everyone would be using it.


Here's an IPFS browser:
How to build it on Linux?


IPFS is much, much more ambitious than GNUnet. GNUnet also doesn't even seem to want to replace the current web, as it still uses http and https.
As far as privacy goes...
Link encryption (used by GNUnet) works well within trusted networks, but it is completely useless on the open internet, as you can't possibly trust all the nodes your packet will travel through. Simply put, the NSA just has to find one node on your route (your ISP) and subpoena it; nobody will go to jail for you. So this looks useless against state actors.

On the other hand, IPFS simply uses Tor, which has proven to be at least somewhat successful against state actors.



Wait there ...

IPFS doesn't use tor. It CAN be used with tor. It doesn't implicitly use tor. IPFS is a DHT implementation to statically address any content. And then distribute it in the most performant way possible.

IPFS isn't meant for privacy and anonymity. It's meant for scalability and redundancy. The problem they're trying to solve is not espionage, but rather how fickle the centralization of services/daemons can be.

Now that's not to say that you can't overlay your own protocol on top of IPFS to guarantee certain things, such as making a file upload form on Tor that then gets pushed to the ipfs swarm. Other servers can then be instructed to pin addresses that are generated from that Tor endpoint, providing redundancy. Should the Tor node be found, other servers (hopefully outside the jurisdiction of any malignant state) can still provide 'routes' to the addresses published through IPFS from that node.

The one use case this is most useful with is leaks.

Wikileaks can have an IPNS name resolve to a JSON-formatted list of addresses that it has "published" -- along with detached signatures for those documents -- and other IPFS nodes can be scripted to resolve the IPNS entry for Wikileaks and pin their own local copies. This way no government can ever take down the specific files "Wikileaks IPFS" publishes, even if they take down the initial ipfs node that published the documents.

The biggest problem people have with IPFS is trying to understand


Forking and commenting this out won't be a problem.

also further down a dev says this:

>Hey, Dev here.

The blocklists you talk about are user configured, and the only place we will enforce them is on the gateways (ipfs.io) that Protocol Labs runs. All other users can opt in to respecting these blocklists, but we have no way to compel users to respect a given list. Even if we did 'hard code' a given list into the default client, it would be fairly trivial to remove (someone would definitely make a patch set and build setup) and I'm very much against forcing such things on users.

So that makes a lot of sense. If you're running the ipfs daemon yourself, you're free to download whatever. But public gateways are under more scrutiny. Same issue with Tor exit nodes: certain measures are needed to comply with local laws if they're provided for the public good. Also, again, if you actually read that post and a few of the links... they're opt-in.

So a non issue.

As a side note, you can make "isolated" ipfs networks that are meant to service intranets or specific "networks" by editing the bootstrap servers. As long as the bootstrap servers don't update themselves with ipfs servers on the 'outside', you'll have a self-contained ipfs network for CDN purposes.


>GNUnet also doesn't even seem to be wanting to replace the current web as it still uses http and https.
GNUnet is a layer under http/https, right?


>IPFS doesn't use tor. It CAN be used with tor. It doesn't implicitly use tor.
That's what I meant. I should have been more specific.


people who have no idea what they're talking about should consider not talking as though they do
everything you just said about gnunet is the exact opposite of correct
next time, at least gloss through a project's webpage before you start spewing misconceptions and lies about it, please


Will I be able to make a bot that would predictively try to predownload any websites and links that I might like onto my raspberry pi, and then when I actually want to see it, my PC just downloads it from the local net?


Not that anon, but dude, at least cite and illustrate your reasons, both of you.


Predictively? What is it that you're predicting, first of all? Do you mean generating content hashes on the fly and seeking to cache them when you hit a valid one?


I was thinking more like:
I visit a blog, and then my bot starts preemptively caching every link on that blog (one level only, let's not download the internet) before I visit any of those links.
Or let's say I go to lainchan.org: while I am looking at 'recent posts', my bot is preemptively caching the index of each board and all the threads in 'recent posts'.

But the important thing is that the bot is on my Raspberry pi or on another designated device on my network acting as a peer.


You could get something working quickly with wget.

The Lainchan example you gave would require special knowledge of how Lainchan is structured.

With that, you should have a generic website helper and a specialized website helper that will be called instead, if available.


If you are having trouble installing IPFS on BSD, as in you install it, try 'ipfs help', and get nothing, this might help:
'which ipfs' or 'sudo which ipfs'
If it says:
/sbin/ipfs
Then do:
sudo mv /sbin/ipfs ~/

Now try 'ipfs help' and it should work.


You should not be using sudo for that. Do something like cp "$(which ipfs)" ~/.



I have a question: if A and B both separately expose the same file (matching checksums and all) to ipfs, could both of those files be reachable using the same address?


Yes. Say A and B both upload StealThisBook.epub to the network without knowing each other, and B wants to send the book to C. B sends the hash to C, who proceeds to download pieces opportunistically from both A and B. Then there are three peers sharing the book by the time D wants to share with E.


i like the way multiple people didn't read the original post and are complaining about


It definitely looks awesome!

I wonder what happens when person A puts up files with a specific hash, and then person B looks for them. Does person B automatically host a copy of these files under the same hash? How does "spreading files" work?
Also, does that mean that if A lets persons B1, B2, ... BN access these files, accessing them gets faster, and if A deletes the files they are still available?



"Getting a file" downloads it and seeds it. Unless it's pinned, it will eventually be garbage collected. This is configurable.


Just some stupid soykaf that I added:


QUESTION: can I freely skip the export-path and bootstrap steps? I did, and it works fine for me.


How feasible is a regular internet chan that hosts literally every image/webm/pdf/whatever on IPFS?



Can the immutable object be something that contains links to other immutable objects? Or a blockchain that references objects in the d's?


Worth mentioning that 0.4.8 is out which includes the long awaited filestore code (IPFS doesn't need to make a second copy of the data anymore), unixfs directory sharding, and private IPFS networks.