[ art / civ / cult / cyb / diy / drg / feels / layer / lit / λ / q / r / sci / sec / tech / w / zzz ] archive provided by lainchan.jp

lainchan archive - /tech/ - 35801



File: 1490169148384.png (236.72 KB, 300x209, 80s_c8f377_5848577.jpg)

No.35801

Will there ever be a rival to our x86_64 world of von Neumann architecture? I mean, the bottleneck is real after all. On the other hand, the big dogs like Intel have billions to spend to maintain their lead, production, research, etc.



Lisp Machine revival when?

  No.35802

>>35801
When we can 3D print viable ICs and processors for cheap from home, the computer revolution will finally begin.

  No.35803

A rival to the von Neumann architecture in the field of general purpose computing is very unlikely. When I say that, I'm speaking of the core principle of homoiconicity rather than any specific implementation. So the modified Harvard architecture (which x86_64 is) counts even if it's not technically von Neumann. Lisp machines would probably also be included.

The von Neumann architecture was revolutionary because it revealed to us what in hindsight seems to be an obvious truth: code is data. While alternative architectures (e.g. dataflow) that fail to exploit this principle can be great within their niche, they fall short at general purpose computing due to the difficulty of having them perform new tasks. The only reasonable way we're going to see a move away from this architecture is if the way we use our computers changes drastically.
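To make the "code is data" point concrete, here's a minimal Python sketch (illustrative only; Lisp machines embodied the same idea far more directly): a program held as an ordinary data structure can be inspected and rewritten before it ever runs, which is exactly what compilers, JITs, and macro systems exploit.

```python
import ast

# "Code is data": parse an expression into a tree we can manipulate.
source = "2 * x + 3"
tree = ast.parse(source, mode="eval")

# Walk the tree and rewrite every integer constant, doubling it --
# the same kind of transformation a compiler performs on code-as-data.
class DoubleConstants(ast.NodeTransformer):
    def visit_Constant(self, node):
        if isinstance(node.value, int):
            return ast.copy_location(ast.Constant(node.value * 2), node)
        return node

new_tree = ast.fix_missing_locations(DoubleConstants().visit(tree))
code = compile(new_tree, "<rewritten>", "eval")

# The rewritten expression is now 4 * x + 6.
print(eval(code, {"x": 5}))  # 26
```

A dataflow machine, by contrast, fixes the computation into the wiring of the graph, which is exactly why repurposing it for a new task is hard.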

There's also the issue of efficiency. The architectures we use aren't built to be simple or easily understandable. They're built to be fast, and quite rightly. We build tools (compilers, OSes, etc.) to make them easy to use and end up with the best of both worlds (or close enough). A computer that's simple and consistent all the way down sounds great for us programmers, but it's not going to fly when it takes half an hour to render my email, and the vast majority of users aren't going to care in the slightest that the programmer didn't have to spend time wrestling the compiler into submission. Even if you manage to build an architecture like this and magically make it popular, somebody is going to emulate it in x86, and then everybody will use that because it runs 12 times faster on the machine they already have.

The idea that you'll make something that is simple and consistent and then, once it's taken off, put in the research to make it fast is naive. You will find that in order to keep what you already have working and still gain efficiency, you must sacrifice your simplicity and consistency. The only reason x86 is the way it is is that this has already happened. At its core the von Neumann architecture is not so complicated but, in the pursuit of efficiency, it's turned into a behemoth full of caches, pipelines, co-processors and virtual registers.

So, yeah, it's possible that someone could design a new homoiconic, efficient architecture for general purpose computing, but that wouldn't be such a big change. We're certainly likely to see innovative new architectures for niche computing, but a big departure from what we already have in general purpose computing is full of problems. Maybe they're solvable and somebody is going to come up with something that runs fast and remains simple, but probably not.

  No.35809

File: 1490201709543-0.png (646.9 KB, 232x300, PB003-110412-F18A.pdf)

File: 1490201709543-1.png (648.58 KB, 232x300, DB001-110412-F18A.pdf)

I sincerely doubt Lisp Machines will be revived in a traditional sense. Hardware may be made to easily execute Lisp, but I'm skeptical that anything like a Symbolics machine will return without a large and centralized effort.

It would be nice to believe hobbyists being able to build their own efficient hardware from the ground up would lead to a great deal of innovation, but I can also see this leading to a single hobbyist architecture becoming standard because software would already target it. I suppose we'll just need to see what happens.

Have you ever heard of the F18A? It's a small stack-machine core; the GreenArrays GA144 connects 144 of them into a dataflow machine. It's hard to get out of the mindset of the von Neumann architecture at times. One wonders how anything can happen without some central order. A dataflow machine such as the GA144 makes sense though, since it's just connected von Neumann machines. Once one eases into understanding that, it gets easier to understand more radical architectural shifts. Just as it's important to consider many languages, including those that don't yet exist, I believe it's important to do the same with architectures, and to see that there's nothing inherently making von Neumann or any other architecture what a computer must be. It doesn't seem like enough people dream of better computers.
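To get a feel for the dataflow mindset, here's a toy Python simulation (purely illustrative; the real chip is 144 independent asynchronous cores passing values over wires): each node fires as soon as all its inputs are available, and no program counter orders the work.

```python
from collections import deque

def run_dataflow(graph, sources):
    """graph: name -> (op, [input names]); sources: name -> initial value.
    A node fires the moment every one of its inputs has a value."""
    values = dict(sources)
    pending = deque(graph)
    while pending:
        name = pending.popleft()
        op, ins = graph[name]
        if all(i in values for i in ins):
            values[name] = op(*(values[i] for i in ins))
        else:
            pending.append(name)  # inputs not ready yet; try again later
    return values

# (a + b) * (a - b) expressed as a graph with no instruction ordering:
graph = {
    "sum":  (lambda x, y: x + y, ["a", "b"]),
    "diff": (lambda x, y: x - y, ["a", "b"]),
    "prod": (lambda x, y: x * y, ["sum", "diff"]),
}
print(run_dataflow(graph, {"a": 7, "b": 3})["prod"])  # 40
```

Note that "sum" and "diff" have no ordering between them at all; on real dataflow hardware they would run simultaneously.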

>>35803
>The only reason x86 is the way it is is that this has already happened. At its core the von Neumann architecture is not so complicated but, in the pursuit of efficiency, it's turned into a behemoth full of caches, pipelines, co-processors and virtual registers.
I'd argue this has more to do with backwards compatibility than pursuit of efficiency.

>We're certainly likely to see new innovative architectures for niche computing but a big departure from what we already have in general purpose computing is full of problems. Maybe they're solvable and somebody is going to come up with something that runs fast and remains simple but probably not.

I very much look forward to a world with more specific machines. Composing music on one machine, creating visual art on another, programming on yet another, and using yet more for other tasks sounds so nice. For one, a single machine becoming incapacitated probably wouldn't completely cripple one's workflow. For another, viruses would find it harder to spread, as many machines wouldn't need to be connected to the internet nor would a virus probably target such specific and possibly unpredictable hardware. As one last advantage, perhaps these machines would actually be good at what they're meant to do.

  No.35810

>>35809
>I very much look forward to a world with more specific machines. Composing music on one machine, creating visual art on another, programming on yet another, and using yet more for other tasks sounds so nice. For one, a single machine becoming incapacitated probably wouldn't completely cripple one's workflow. For another, viruses would find it harder to spread, as many machines wouldn't need to be connected to the internet nor would a virus probably target such specific and possibly unpredictable hardware. As one last advantage, perhaps these machines would actually be good at what they're meant to do.
Or a manufacturer could give you a walled garden, telling you what you can or can not do on a machine.

  No.35811

Quantum computing might change the architecture quite drastically.

  No.35812

>>35811
ehhhh... it's a mistake to think that quantum computing will ever replace classical computers for most things. They're more expensive and, for a wide class of problems, not any faster. More likely they'll act as a sort of "oracle", used to speed up computation for specialized problems (database searches and factoring, mostly).
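To put rough numbers on the "not any faster" point, here's a back-of-envelope Python comparison of oracle query counts for unstructured search. Grover's algorithm gives only a quadratic speedup (roughly (π/4)·√N queries versus ~N/2 classically), which is why it pays off for niches rather than everything:

```python
import math

def classical_queries(n):
    # Expected lookups to find one marked item in an unsorted set of n
    return n / 2

def grover_queries(n):
    # Grover's algorithm needs about (pi/4) * sqrt(n) oracle calls
    return (math.pi / 4) * math.sqrt(n)

for n in (10**6, 10**12):
    print(f"N={n}: classical ~{classical_queries(n):.0f}, "
          f"Grover ~{grover_queries(n):.0f}")
```

A million-entry search drops from ~500,000 queries to under a thousand; nice, but nothing like the exponential advantage factoring gets from Shor's algorithm.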

  No.35814

>>35812
If we look far enough into the future I am sure quantum coprocessors are gonna wiggle themselves into home usage eventually.

  No.35820

>>35801

I don't know about Lisp Machines, but with the end of Moore's Law there will certainly be some interesting new developments in architecture. In the old days you could always rely on the fact that computers would be better in a few years, so there wasn't much incentive for research, because it is very hard to compete with exponential growth. Moore's Law was also a useful thing for chip manufacturing companies and end users, because it introduced predictability and stability. But with its end, new ideas will no longer be forced to compete with exponential growth, so I think there will be new developments not just from the major established brands but also from small research groups and startups.

  No.35822

>amd64
Respect where respect is due.

  No.35823

>>35810
>Or a manufacturer could give you a walled garden, telling you what you can or can not do on a machine.
They're already trying that; the solution is still to avoid buying hardware you can't control.

  No.35825

File: 1490212677638.png (27.06 KB, 200x143, Here+is+a+smug+lain+_d5f454cc96d511f783e2d0cb36a4a569.jpg)

>>35814
Oh you mean those Knights acceleration chips?

  No.35831

>>35809
>I'd argue this has more to do with backwards compatibility than pursuit of efficiency.
There's truth to this, and maintaining backwards compatibility is the only reason that many components still exist, but it was in the pursuit of efficiency that many of these components became legacy in the first place. That said, you're certainly not wrong. There are components that are now totally legacy and only ever existed to enable backwards compatibility. There are other things as well. Protection rings, say, aren't about speed or backwards compatibility; they're just supposed to be useful. It was a mistake for me to say the only reason is efficiency.

Backwards compatibility is a bitch though. It's just not an easy problem to solve. You're always in this position of not being able to update the hardware in a way that breaks recent code, because it's still in use, and not being able to write code that doesn't use the backwards compatibility features, because it won't work on older hardware. I do think more could be done though. For instance, our computers start up in 16-bit real mode before bootstrapping up. If you get rid of real mode and make them start up in even just 32-bit protected mode, you break every bootloader. Likewise, I can't write a bootloader that skips the bootstrapping, because it just won't work on existing machines. Thing is, surely we could build a machine that starts in 32-bit mode (even with the A20 line still disabled) and simply ignores the bootloader's requests to switch upwards. Old bootloaders would still work and, once starting at a longer word length is standard, new bootloaders can stop bothering. That's probably one of the easier examples though.

>I very much look forward to a world with more specific machines.

I know a lot of people want this and it's what I was thinking when I said "unless the way we use our computers changes drastically" but I just don't see it happening myself. Maybe it's just my specific requirements but I'm the type of person who will take up a new task just to give it a try. That wouldn't be reasonable for me if I needed new hardware every time.

>a single machine becoming incapacitated probably wouldn't completely cripple one's workflow

This is true but you also increase the chance that a machine does become incapacitated and when one does I can't just use any other machine I have as a temporary replacement.

Continued.

  No.35832

>>35831
Continued.

>many machines wouldn't need to be connected to the internet

I cannot imagine a machine that wouldn't want a connection. A music composition machine that can't download a sample or an art machine that can't download an image isn't great. For any task there's the possibility that I want to do the first half and then send what I've done to someone on the other side of the world to finish.

>perhaps these machines would actually be good at what they're meant to do.

How so? What would you get from a music composition machine that you don't get from a general computer with a music editing suite? Or if that's a bad example any task is the same. Failure is not an option is a core principle of general purpose computing. The efficiency bonus would likely be outweighed by the fact that I can only spend a small portion of my computing budget on the music machine. When you consider that the majority of these computers will need to reproduce components found in another machine (visual display springs to mind) it's going to take more resources to get the same results.

It occurs to me as I'm writing this that perhaps there's some nice middle ground here. I thought: well, most machines will want a screen, so why not save resources by having a single screen that you can plug into any of them? But then you may as well put some video processing hardware in the screen, since any machine that uses the screen will need some, and we'll save on duplication there. If you take that idea all the way, though, you end up with a general purpose computer with better support for additional components like sound cards. Doesn't sound so bad, and it would be a departure from what we have currently in that control would be inverted: currently your core system controls your sound card rather than the other way round.

>>35814
They might but another co-processor is not a big architectural change. They'll probably work much the same as video cards do today.

  No.35833

File: 1490226270925.png (548 KB, 200x150, 1459133679014-2.jpg)

>>35832
It's not that I'm blind to the inevitability of progress, but I find it funny imagining it given the limits of current technology, and just how much care and space goes into maintaining the entanglement of a single qubit pair and into applying even simple gates to arbitrary qubits. I mean, universal quantum computing in our homes is so hard to imagine. I would suspect it'll be a cloud service for a long, long, looong time.

  No.35851

File: 1490298187021.png (179.43 KB, 200x185, 1483660872441.png)

>>35825
Can you get your hands on one of those for me, Lain? Pretty please? My wired access is so slow.

  No.35855

File: 1490300979985.png (14.82 KB, 200x162, no-cloud.jpg)

>>35809
>>35832

>I very much look forward to a world with more specific machines.


You lains are crazy. Why would you want to throw The Universal Computer out the window? Any non-universal computer is just gonna be a universal computer with DRM on top, just by the nature of this universe (everything useful is Turing complete).

More modular computers? Yes.

All software runs in isolated VMs by default? Yes.

Computers are stateless and you can have all your state on an external encrypted data stick accessed through open-source FPGA (so that encryption keys would never be stored in memory of mass-produced hardware)? Yes, please!

A computer that decides what you can do on it for you? What the fuarrrk, no!

  No.35863

Altera FPGA platforms cost <$100 now GET CRACKING LAINONS

  No.35873

File: 1490366974415-0.png (493.96 KB, 143x200, Sansa_Clip_Zip.jpg)

File: 1490366974415-1.png (11.97 KB, 200x62, rockbox400.png)

>>35855
>You lains are crazy. Why would you want to throw The Universal Computer out the window? Any non-universal computer is just gonna be a universal computer with DRM on top, just by the nature of this universe (everything useful is Turing complete).
When did I ever write that these specific machines I would care for wouldn't be Turing complete?

>More modular computers? Yes.

Modularity in hardware generally leads to a hellish landscape of incompatibilities. Observe the IBM PC.
Of course, hardware made for hackers by hackers would necessarily be alterable in some fashion.

>All software runs in isolated VMs by default? Yes.

That's far too complex; besides, VM authors and hardware manufacturers have shown themselves to be incompetent enough to make this insecure.

>Computers are stateless and you can have all your state on an external encrypted data stick accessed through open-source FPGA (so that encryption keys would never be stored in memory of mass-produced hardware)? Yes, please!

I've had a similar idea before; it's nice to see it from someone else, although I didn't care about encryption.

>A computer that decides what you can do on it for you? What the fuarrrk, no!

That's many current computers, not what I envision.

Take a portable music player I own, as an example. It's a universal computer, but it's obviously not designed for that. It's very obviously designed for a single, specific purpose, and it serves this much better than a laptop would, due in part to its size. I use Rockbox with it and have no issues. I need not worry about security, because it doesn't connect to the internet and I store nothing sensitive on it. Extend this notion to various forms of development, such as visual art and programming, and you'll understand what I mean. There's no reason the software on a music development computer couldn't be customizable in a way that's easy for musicians trained in it. These special machines could even have ROM storage, so that the machine would be much harder to incapacitate than a general computer, which can easily delete its entire software.

The general computer, while at times nice, is generally worse than what it could be replaced with.

  No.35874

File: 1490369878654.png (2.54 MB, 200x161, 672582.png)

>>35855
I couldn't agree more with you, lainon. I completely refuse to share that weird fetish for specialized computers. I mean, specialized devices are good, but on Lainchan it often comes together with resentment of general computing, which is the highest form of computing for me, personally. I'm still not sure if it's general consensus and hivemind among most lains, or just a bunch of guys talking about this soykaf constantly.

Peaceful coexistence of general computing machines and specialized ones has my thumbs up though.

Also, what the fuarrrk is wrong with the x86 architecture in general (leaving out such "neat" bonuses as AMT/PSP and other soykaf found in modern implementations; that's out of scope and not part of the ISA itself), and why would you want computers to execute Lisp directly? I believe parsing such a relatively high level language would be many times slower than executing the typical machine instructions we have in various existing architectures. Don't want.

>>35873
No, modularity is imo one of the core reasons I love platforms that follow that design principle. As for incompatibilities... there are those neat things called standards to prevent that from happening. I know standards aren't always faithfully followed, but that's not a problem of a purely technical nature; it's a problem with the people responsible for building the HW.

  No.35875

>>35874
>I mean, specialized devices are good, but on Lainchan it often comes together with resentment to general computing, which is the highest form of computing for me, personally. I'm still not sure if it's general consensus and hivemind among most of lains, or just a bunch of guys talking about this soykaf constantly.
I believe all of the times you've read that may have been written by me.

>Peaceful coexistence of general computing machines and specialized ones has my thumbs up though.

I was never advocating for the death of general computers; I just wouldn't want to use them.

>Also, what the fuarrrk is wrong with the x86 architecture in general (leaving out such "neat" bonuses as AMT/PSP and other soykaf found in modern implementations; that's out of scope and not part of the ISA itself), and why would you want computers to execute Lisp directly?

The x86(_64) architecture is extremely complex. Backwards compatibility leads to different layers interacting in strange ways, which may be intentional security flaws. This architecture is the only one I can currently recall that has a manufacturer library solely for manipulating and classifying instructions:
https://software.intel.com/en-us/articles/xed-x86-encoder-decoder-software-library

>I believe parsing such a relatively high level language would be many times slower than executing the typical machine instructions we have in various existing architectures. Don't want.

Lisp wasn't parsed as text like this. A Lisp machine had primitives for efficiently manipulating linked lists, along with hardware enforcement of type checks, array bounds, and so on.
Here's a good starting point for learning more: http://fare.tunes.org/LispM.html
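As a rough illustration of what "primitives for linked lists with enforced types" means, here's a toy Python model of tagged words with checked CAR/CDR. All the names are made up for illustration; real Lisp machines did this in hardware, with tag bits carried in every memory word and type traps taken in a single cycle rather than as explicit checks.

```python
from dataclasses import dataclass

# Every word carries a type tag alongside its value, as on a tagged
# architecture. "cons", "fixnum", "nil" are illustrative tag names.
@dataclass(frozen=True)
class Tagged:
    tag: str
    value: object

NIL = Tagged("nil", None)

def cons(a, d):
    return Tagged("cons", (a, d))

def car(x):
    if x.tag != "cons":  # models the hardware type trap
        raise TypeError(f"CAR of non-cons: {x.tag}")
    return x.value[0]

def cdr(x):
    if x.tag != "cons":
        raise TypeError(f"CDR of non-cons: {x.tag}")
    return x.value[1]

# Build the list (1 2 3) and walk it.
lst = cons(Tagged("fixnum", 1),
           cons(Tagged("fixnum", 2),
                cons(Tagged("fixnum", 3), NIL)))
print(car(lst).value)       # 1
print(car(cdr(lst)).value)  # 2
```

The point is that taking CAR of a non-cons is caught by the machine itself, not left to corrupt memory as an errant pointer dereference would on conventional hardware.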

>No, modularity is imo one of the core reasons I love platforms that follow that design principle. As for incompatibilities... there are those neat things called standards to prevent that from happening. I know standards aren't always faithfully followed, but that's not a problem of a purely technical nature; it's a problem with the people responsible for building the HW.

Once you disregard the many years of effort put into monolithic behemoths such as the Linux kernel, implementing those standards becomes an absurd task. If I can't implement it on my own or with a small team, I'd rather not use it at all, because it's probably too complicated.

  No.35884

>>35874
Well, it's not a fetish for me, really, but I gotta say, the idea seems attractive in an odd kind of way. I wouldn't do without general purpose computers altogether, but being the proud owner of an HP calculator, I can see where this comes from. They are comfy. I also have a tablet which I use for reading and watching videos, so I already segregate my computing tasks. I can also imagine a scenario where we'd have laptops that are little more than web browsers (in a dream world where the web isn't the horrible thing it is today).
On the other hand, having a soykafload of little devices can get annoying. But that's just because I'm a bit of a technophobe (a technophile technophobe :)
I think we're approaching that world in a way though. We have music players (does the iPod still exist?), cameras, cell phones, e-readers, GPS devices, video game consoles (including handhelds), calculators, and have you seen the music devices made by Teenage Engineering?
And as for "by hackers for hackers", we are pretty much covered by stuff like the Arduino.
So yeah, there's an appeal to it, and with the current state of operating systems, it is very appealing compared to having some GNU-ridden operating system full of bloatware and incompatibilities between libraries constantly breaking things.
Modern operating systems have done nothing but frustrate me.

  No.35896

>>35884
T/B/H, with regards to "dedicated/specialized" devices, I dug up my old iPod (nano 3rd gen) the other day, and it's actually really damn convenient now that I realize it. Having a music player with its own battery, separate from my phone's, and its own internal storage, everything separated out with its dedicated interface as a dedicated device... it's actually not bad at all.
(Yes, I know there's no FLAC, but neither my car speakers nor portable non-monitor headphones can really show the FLAC-vs-V2 difference anyway.)

  No.35899

>>35874
>Also what the fuarrrk is wrong with x86 architecture in general (count out such "neat" bonuses like AMT/PSP and other soykaf, found in modern implementation, this is out os scope, and not the part of ISA itself), and why would you want computers to execute Lisp directly?
You can't build this on x86: http://www.loper-os.org/?p=284