[ art / civ / cult / cyb / diy / drg / feels / layer / lit / λ / q / r / sci / sec / tech / w / zzz ] archive provided by lainchan.jp

lainchan archive - /tech/ - 35356

File: 1489045833082.png (53.78 KB, 300x200, plato_statue_greek_philosophy-100666783-primary.idge.jpg)


Let's do a little thought experiment, lains! What's your idea of the best possible internet replacement, and how would the devices connected to it behave, what would their design be, etc. Feel free to ignore today's limitations on technology if you like, and go as high-level or as low-level as you like. This thread is only meant to be a discussion in hypothetical terms, but if any of it happens to be practical that's fine too.

Pic related because Ideal Forms


The biggest problem with the internet right now (in my opinion) is how easily monopolized it is. Whoever is willing to spend the most money to centralize it around themselves is going to control everyone's lives. Decentralizing/distributing is a must to combat malicious/authoritarian services like Google.
It should be fundamentally impossible to trace anyone, and it should be fundamentally impossible to require someone to identify themselves. That should be completely voluntary.
Basing services on free software is necessary. Richard Stallman, as crazy as he sounded when forming GNU, was right: if you rely on a piece of software you have no control over, you're doing something wrong. Facebook and Twitter have the power to control public discourse on a global scale and no one can do anything about it beyond just not using them.

TL;DR: make it impossible to curb civil and individual liberties. You can't trust the powers that be to play nice and not exploit inherent flaws in the protocol. Personally I think IPFS/I2P/Tor are great first steps.


I would like to connect myself to the world and leave my physical body to join the vast world of information. That way I wouldn't be lonely anymore, everyone and everything would be connected.


If you're just asking for daydreams, I like the ideas behind CJDNS and would love to see them implemented. It fulfills pretty much what >>35361 is talking about (decentralized, encrypted, though not necessarily anonymous) at a low level in the network stack. In its ideal state, it would add considerable redundancy to the network, encrypt all users' traffic by default, and go a long way towards destroying the internet providers' hegemony.

For those who haven't heard of it, it's a protocol designed for mesh networks, in which packets are able to take the shortest path through a series of nodes without resorting to the "backbone routers" on which we all currently rely. From what I understand, each node in the network maintains its own routing table, but only of the nodes that are (topologically) closest to it. Your neighbor passes you a packet, and the only part of it you're able to decipher is who it should be passed to next. Obscuring the sender, receiver and content of every packet is core to the protocol. Nodes are identified by (a hash of) their public key rather than an IP address, and packets are signed with the private key so their origin can't be spoofed. Someone more knowledgeable can correct me if I'm mistaken on any of these points.
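To make the key-as-identity part concrete: as I understand it, a cjdns node's IPv6-shaped address is derived from a double SHA-512 of its public key, and only addresses landing in the fc00::/8 block are accepted, so keys are regenerated until one fits. A minimal sketch, with random bytes standing in for a real Curve25519 public key:

```python
import hashlib
import ipaddress
import os

def cjdns_style_address(pubkey: bytes) -> ipaddress.IPv6Address:
    """Derive an IPv6-shaped node identity from a public key:
    the first 16 bytes of sha512(sha512(pubkey)), cjdns-style."""
    digest = hashlib.sha512(hashlib.sha512(pubkey).digest()).digest()
    return ipaddress.IPv6Address(digest[:16])

# Regenerate (toy) keys until the derived address falls in fc00::/8,
# mimicking how a node would search for a valid keypair.
while True:
    key = os.urandom(32)  # stand-in for a real Curve25519 public key
    addr = cjdns_style_address(key)
    if addr in ipaddress.ip_network("fc00::/8"):
        break
```

The point is that the address is cryptographically bound to the key: you can't claim someone else's address without their private key, which is what makes spoofing the origin impractical.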

As for how it would be implemented, there's already a network called Hyperboria running it over the conventional internet (UDP) but it's designed to work on bare ethernet/wifi connections between users. Nodes can be anything from big data centers handling massive bandwidth, through the local nerd who sets up a "neighborhood ISP" with a server and a couple of GPUs, down to a little SBC plugged into the cafe's phone line. And of course your phone and laptop would maintain their own, ever-changing routing tables and pass packets around in the background wherever you went (in exchange for which, everyone else's phones would pass your packets around). Surely net bandwidth would go through the roof in a scenario like this, especially if the actual services using them were taking advantage of the new network topologies (e.g. IPFS devs even now use it over CJDNS no problem).

Sorry for the length, you asked for Ideal Forms.


We have good laptops with an operating system that is essentially nothing more than a web browser. Or rather, the system is essentially Emacs, but with a better scripting dialect (maybe something like Racket with its #lang support, so everybody gets to have their favorite language).
The implementation would be very simple at the core, no boat, tailored especially for interacting with the network. The protocol would be something better than HTTP, and some bytecode would take the place of JS (like WebAssembly in the hopefully near future). The interface and the system would work in synergy: while it would remain buffer-oriented, as Emacs has always been, it would also have primitives for building all sorts of applications that display well without workarounds, so abusing JavaScript (or whatever bytecode replaces it) would be unnecessary, and discouraged besides. Things like games would certainly be possible, but you also wouldn't need to scrape horrendous HTML; it would all be handled better internally (for example, a sidebar menu would be a sexpr called *menu1*, and you would be able to display it however you wanted, or not at all).
Of course, internet access would be far more widespread. Perhaps the devices would have 4G, taken from the greedy hands of cellphone providers and used properly.

I feel I'm missing something.


Massive amounts of fiber optics running under every street and into every home and building. Walls are fitted with computer networking sockets instead of soykafty phone lines. Networking cables are normal and phone cables are made fun of as the obsolete technology they are.

Everyone simply IS on the network; you don't need to ask permission or prove you're a subscriber. It's your right. Network switching equipment is maintained by the city; more local switches can be set depending on geography.

There are no data caps. Your bandwidth is variable: equal to the total physical bandwidth available divided by the number of concurrent users.

The system is made with the future in mind. There's ample trench space to lay additional fiber or replace old fiber if needed; ample budget for additional or more modern switching equipment to be installed.

Private ISPs must pay to peer with this network, and must provide an equal amount of bandwidth.


quantum entanglement networking.

From my (very basic) understanding of entanglement, you would be able to entangle some particles and then separate them. Input a change into one, and the change will be reflected in the other. So it would be possible to create unblockable, untrackable comms devices.


Every problem with today's network is inherently political or economical: surveillance, control, obsolete things thanks to regulations, all of these supposedly countering bad people doing bad things, coupled with the greedy, hungry profit machine.
The internet is made and maintained by society, you cannot fix it without fixing society. Alternatively I could say that the internet is great as it is, the only problem is with users' personal attitudes towards the political and economic implications it has.
Networking has been around way before the use of electricity, and humans are an integral part of the network. Trying to fix just the standards, protocols, devices and the wiring is like not even half of the job.

I don't want to discourage anyone though; perhaps fixing society does require better networking. Just don't try to isolate the problems when they're connected with society more tightly than Google's data centers are with each other.


OP here. I was hoping for some in-depth stuff, and at least some posters delivered, so thanks. This is really interesting to me.
I like this idea a lot, it sounds like a potentially huge leap forward for net neutrality if done right
>Networking has been around way before the use of electricity, and humans are an integral part of the network.
this is a great point, we should all probably remember sneakernets.


>this is what communists really believe<


I think better integration of our devices into the network all the way through is important. Even with "cloud computing" being popular we still pay a lot of attention to what's local and what's remote. I'd like to remove this distinction somewhat and have any device capable of seamlessly operating as a client or a server as needed. Devices would be flexible in this and hide the messy details from the user. For example, if I run a video-intensive task on my laptop it might prefer to use my desktop as a video processing server. If the desktop is powered down (and I don't trust the rest of the network) it will fall back to rendering itself. As much as possible, the distinction between "do a thing" and "ask another machine to do a thing" should be automated. The only time a user should worry about something being local is with keys. If I ask for a list of all the tasks my computer can perform, it should give me every task available on the network. The web, as we use it, doesn't fit well with this idea. There's still a place for personal home pages and business brochures, but most of the stuff we use the web for, like "tell me the atomic weight of oxygen", "commence brewing soykaf" or "show me boobs", should be their own services.
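The fallback behaviour described here (prefer a trusted remote helper, render locally when it's down or untrusted) can be sketched in a few lines. Everything below, the Host model and names included, is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Host:
    """A peer that may run tasks for us (toy model)."""
    name: str
    online: bool = True
    trusted: bool = True

    def run(self, task):
        if not self.online:
            raise ConnectionError(f"{self.name} is unreachable")
        return task()

def dispatch(task, hosts, run_locally):
    """Automate the 'ask another machine' vs 'do it myself' decision:
    try each trusted, reachable host; otherwise compute locally."""
    for host in hosts:
        if not host.trusted:
            continue  # never hand work to hosts we don't trust
        try:
            return host.run(task)
        except ConnectionError:
            continue  # host is down, try the next one
    return run_locally(task)

# Laptop's view: the desktop is powered down, a stranger is untrusted,
# so the render falls back to the laptop itself.
hosts = [Host("desktop", online=False), Host("stranger", trusted=False)]
result = dispatch(lambda: "remote render", hosts, lambda t: "local render")
```

The user-facing behaviour stays the same either way; only where the cycles are spent changes.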

While it's an idea that's great for usability and convenience, it has a lot of freedom problems. Decentralisation and security must be paramount concerns. I'd like a trust network in which peers perform computing tasks and offer data storage to one another, and keep track of how much of a contribution to (or drain on) the network a host or user is, hopefully with homomorphic encryption protecting privacy. Services should not have a centralised or private data store or trusted operators (e.g. a dedicated server). Things like that should be enforced with encryption rather than centralisation. For instance, lainchan would mostly just be plain post data distributed across the network. Posts from mods would be signed with that mod's key. Mod announcements or whatever would be encrypted with a key known to the mods. The code to handle these operations would likewise be stored in a distributed fashion, and any host would be capable of performing them, provided it had the applicable keys of course.
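A rough sketch of the signed-post idea. The post calls for public-key signatures (only mods can sign, any host can verify); Python's stdlib has no asymmetric crypto, so HMAC with a shared key stands in here purely to show the shape:

```python
import hashlib
import hmac
import json

def sign_post(post: dict, key: bytes) -> dict:
    """Attach a signature so any host storing the post can check it
    hasn't been tampered with. Real design: Ed25519 or similar, so
    verification needs no secret; HMAC is a stand-in."""
    body = json.dumps(post, sort_keys=True).encode()
    sig = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {**post, "sig": sig}

def verify_post(signed: dict, key: bytes) -> bool:
    """Recompute the signature over everything except 'sig' itself."""
    post = {k: v for k, v in signed.items() if k != "sig"}
    body = json.dumps(post, sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["sig"])

mod_key = b"mod-secret"  # hypothetical mod key
announcement = sign_post({"board": "tech", "text": "rules update"}, mod_key)
```

With real public-key signatures, any host holding only the mod's public key could run the verify step, which is what lets post data live on untrusted hosts.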

The network would redistribute itself based on usage. So the more chan posts I make, the more likely it is that the chan posting code will be downloaded locally. Likewise, if I do a lot of video encoding on a weak machine, the network might move the code to my neighbour's computer. It might even make a copy of my videos to their hard disk. While there's no reason hosts can't enforce their own rules on interaction as they see fit, hosts would be incentivised via the contribution/drain metric to do what's best for the network as a whole. So my neighbour's computer would begin offering the video encoding service because it's calculated it's a good opportunity to earn brownie points; it might even earn more from my tasks because of the low travel distance. It's reasonable to think that, if it were no effort, people would happily donate their spare cycles and disk space to the network even when they were well in credit. This would hopefully mean that the cut-off point for too much debt would only be a problem for malicious agents.
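The contribution/drain metric might look something like this minimal ledger; the debt floor and host names are invented for illustration:

```python
from collections import defaultdict

class ContributionLedger:
    """Toy contribution/drain bookkeeping: providers earn credit,
    consumers spend it, and hosts too deep in debt stop being served."""

    def __init__(self, debt_floor: float = -100.0):
        self.balance = defaultdict(float)
        self.debt_floor = debt_floor  # cut-off for "too much debt"

    def record(self, provider: str, consumer: str, cost: float) -> None:
        # e.g. my neighbour encodes my video: they earn, I spend.
        self.balance[provider] += cost
        self.balance[consumer] -= cost

    def in_good_standing(self, host: str) -> bool:
        return self.balance[host] >= self.debt_floor

ledger = ContributionLedger()
ledger.record(provider="neighbour", consumer="me", cost=5.0)
```

The open question the post raises (a spammer shedding debt by reappearing as a fresh host) is exactly what a scheme this simple can't solve: balances are keyed by an identity that costs nothing to mint.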




New services should be creatable by anyone, though in order to create a recognised service you must provide the means for others to reproduce it, in the form of a binary with globally accepted rules about its format. The trust system must account for malicious services vetted by humans. Most little-used services that aren't sufficiently vetted will only ever run on the host that creates them, simply because no other host trusts the code. I don't think providing source code should actually be mandatory: the binaries will have to be more transparent than usual to ensure they can still be correctly linked for any host, and most people will benefit greatly in the vetting process by providing the source. Patches and updates should work likewise, and the code that controls the network itself should be included, though perhaps with special rules to hinder malice or ensure uniformity. There'll also have to be a system to discourage deliberately avoiding vetting in order to centralise the service.

Data on the network should always have an ownership associated with it. Data is always readable by anyone (after all, it could be on their drive), but hosts should refuse attempts to update or delete data they're storing until ownership is proven. While for most normal user data ownership will be simple, the data generated by services might have more complicated access rules. Though it's not strictly essential, TCP should be rethought. While there's obviously still a practical need for host-based routing, communication should be organised primarily by task. It should be commonplace for a host to get involved in communication only to prove it owns the right keys, or for a host to easily receive batches of related data from many other hosts.
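The read-open / write-guarded rule could be sketched like this, with a hash of an owner secret standing in for the key-based ownership proof the post has in mind:

```python
import hashlib

class DataStore:
    """A host's store: reads are open to anyone, but updates require
    proof of ownership. A hashed secret is a stand-in for signatures."""

    def __init__(self):
        self._items = {}  # name -> (owner_proof_hash, value)

    def put(self, name: str, value: str, owner_secret: bytes) -> None:
        proof = hashlib.sha256(owner_secret).hexdigest()
        self._items[name] = (proof, value)

    def get(self, name: str) -> str:
        return self._items[name][1]  # no proof needed to read

    def update(self, name: str, value: str, secret: bytes) -> None:
        proof, _ = self._items[name]
        if hashlib.sha256(secret).hexdigest() != proof:
            raise PermissionError("ownership not proven")
        self._items[name] = (proof, value)

store = DataStore()
store.put("profile", "v1", owner_secret=b"alice")
```

In the real scheme the host would verify a signature against the owner's public key instead of comparing hashes, so the secret never travels to the storing host at all.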

Practically speaking there are a lot of problems with this. The difficulty of implementation, and the necessity of rewriting everything in this hard-to-use distributed environment, complete with possible code and data version mismatches and malicious hosts, is not just a herculean effort but is going to cause security and stability problems. I also don't know much about trust networks, and I'm not sure one can be made to these specifications. If anyone has insight I'd be much obliged. I can see problems with host identity (what's to stop a spammer from running up a load of debt and then pretending to be a fresh host?), and I wouldn't be surprised to find there are better ways to get hosts to act for the good of the network and mitigate malicious agents. I'd like to provide an incentive for people to check services for malicious code. The credit/debt system is intended to incentivise people to set up their host to maximise it, rather than as a real reward. The system also needs a way to handle charity: if computing power and storage space are in abundance, nobody will mind if scientific research runs up a giant debt. It also seems likely that it'll be necessary to ensure only a single version of any service is used network-wide, barring update time of course, so hosts are as indistinguishable in their workings as possible, which is bad news for rewrites.

As you may have noticed this is something I've given quite a bit of thought and there's a lot more I could say. I'm very slowly designing an OS with these ideas in mind, though I've been more concerned with the programming challenges of having hosts communicate than the networking ones of finding the right hosts to communicate with. Don't get me wrong it's just a hobby and I'll eat my hat if it ever goes anywhere. Like I say, if anyone has thoughts on any of this I'd love to hear them but especially concerning the trust system, network redistribution and possible crypto practices. I don't know much about those areas and for all I know my ideas are totally impractical or I'm missing something really useful. Please, shoot me down.


I would completely remove javascript/programmability, all the stupid scriptable garbage like XMLHttpRequest and local storage, web fonts, iframes/frames, includes in CSS, and 3rd-party inclusion of anything except maybe images and videos. Pretty much I would trash everything except HTML, CSS, and non-interactive media like video.

Once the boat is gone, speed wouldn't matter as much. So I'll go Star Trek and say the internet should use subspace wireless signals.
Advantage: It would work anywhere.
Disadvantage: It would probably give us cancer.


why ditch programmability? sandboxed scripts are very useful for doing things such as hashing passwords client-side


File: 1489221575992.png (64.21 KB, 200x184, bells_theorem_2x.png)

It is impossible.

The problem is any single measurement will collapse the whole system.

In other words, prior information about how to make measurements has to be shared in order for the results of measuring an entangled system to be meaningful.

This prior knowledge has to be shared or transmitted by other means (e.g. a one-time pad, or internet communication) before you make any measurement. You cannot use quantum entanglement as a communication device, as far as we (currently) know, and this is not a simple limitation of technology.

And don't forget that the speed of light restricts how fast any information can propagate through space.

It is possible to send an encoded qubit over some media, like an optical fiber, though.


> why ditch programmability?
Because there are a million bad things that web browser programmability is used for. They far outweigh any useful things anyone could think of, and frankly it's going to get worse.

> sandboxed scripts are very useful for doing things such as hashing passwords client-side

A simple solution would be an HTML input element that hashes the password before sending it. This would prevent it from being sent as plain text maliciously.
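A sketch of what such an element might compute, assuming a slow KDF rather than a bare hash so the transmitted value can't be cheaply brute-forced; the salt scheme and iteration count are illustrative, not any standard:

```python
import hashlib

def client_hash(password: str, salt: bytes) -> str:
    """What a hashing input element might send in place of the raw
    password: PBKDF2-HMAC-SHA256, so intercepting the value doesn't
    reveal the password and offline guessing stays expensive."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return digest.hex()

salt = b"example.com:alice"  # hypothetical per-site, per-user salt
sent = client_hash("hunter2", salt)  # the raw password never leaves the client
```

The server would still need to hash the received value again before storing it; otherwise the transmitted digest itself simply becomes the password.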


>They far outweigh any useful things anyone could think of, and frankly it's going to get worse.

What makes you say that? How do you know all the useful things that anyone could think of?

Anyway, this is all theoretical, so we're not limited to current browsers or javascript, both of which are soykaf. A proper language, possibly with provability properties, that is fully sandboxed by the browser would be very nice. It would probably be more secure than any software running on your computer right now.


CJDNS should replace IP. TCP is okay but I feel like it could use some rejiggering.

i2p should replace tor as the predominant anonymisation network. If you melded i2p and CJDNS you could probably get nice speeds.

I agree with >>35476, javascript is soykaf but you could make a better scripting language for general web browsing.


While everyone is daydreaming about the future:
GNUnet now, lainchan, or there will be no future you daydream of, only the Matrix with Microsoft instead of the AI.


>CJDNS should replace IP
first line of the git repo's README:
>Cjdns implements an encrypted IPv6 network


File: 1490296284291.png (51.29 KB, 200x160, 1489689602443.gif)

Those lainons' posts are on point. I absolutely agree that the current technological base of the internet is mostly good, and that the core issues lie in society and in the management of the internet as a whole, the monopolization of basic services, and so on. This is what people should focus on solving first, rather than reinventing the wheel (i.e. the basic protocols and implementations, which are really good as they are now).

I have very mixed feelings about this, since I don't really want to use something I completely don't understand. The potential increase in privacy doesn't justify it for me.

It's more of a communist utopia than a feasible solution tho

I personally don't like the whole idea of automating and hiding the operational details even further from the user. It would leave users with even less control over things. I'm absolutely OK with requesting some remote service explicitly rather than having my OS or other software decide when to use local or remote processing; that adds nothing but a mess for me. So no, thanks, and these are the words of a Windows user, so I bet a typical lainchan Linux poweruser is likely to be even more disgusted by this idea than me. Also, I don't want to be a server to anyone.

You went a bit over the top with this, but generally I agree: JS overload has been a cancer of the web for years. I don't mind pages serving it if I can block it easily with tools like uMatrix where I don't trust it, and allow it where I do and find it really useful, BUT they should provide equivalent functionality for browsers that opt out of executing it. Not even mentioning leaving the user with a blank page of nothing, a thing that is getting more and more common, sadly.


>the Internet and the protocols currently used in the TCP/IP stack are really good
We have two ways to address an interface and none to address a node or an application; TCP is retarded; IPv6 is a mess that creates more problems than it solves; multihoming and mobility are, have been, and always will be a dirty hack; etc.
The current stack is a Frankenstein held together by tons of duct tape.
If you really want to see a well-designed network architecture, look at RINA. I hope it gets some traction and replaces TCP/IP at some point.


It uses IPv6 addresses but its routing and address allocation are not like the current IP/DHCP architecture.

>TCP is retarded

what about TCP is bad? It's not very complex and does what it's supposed to do well.


>I personally don't like the whole idea of automaticizing and hiding the operation details even more from a user
All of the operating details are hidden from you whatever you're doing. Computing without these layers of abstraction hasn't really been done since the 60s. The operating details would be no more hidden under this scheme than any other, just more flexible.

>It would be leaving users with even less control over things

Not at all, if anything quite the opposite. Users who don't change the default config will have the same lack of control they already do. Everyone else can still do entirely as they please, and if you do choose to use a remote service you can verify that the computation was done honestly.

>I bet a typical lainchan Linux poweruser is likely to be even more disgusted by this idea than me

I am one of those people so I'd think you'd lose that bet. The OS is still going to do whatever the user asks of it, it just has more options for achieving this because everything is built to be capable of distribution. If you want to insist on doing everything locally you can.

>Also, I don't want to be a server to anyone.

This is the big one. Why not? It'll only ever use the spare resources that would otherwise go to waste, and in return you get access to everyone else's spare resources should you need them. Seems like a pretty good deal to me.


You guys are so simplistic when thinking about it...
Let your mind expand, be a free spirit (a Nietzsche concept).

a tip: netsukuku - DASH7 - SPHINCS - steganography - seL4 - LowRISC - faraday cage

second tip: characteristica universalis - mind upload

you're not too deep into the rabbit hole yet fellas.



seL4 is a microkernel, it doesn't do much of anything by itself. Put genode on top of it and now you're talking.