[ art / civ / cult / cyb / diy / drg / feels / layer / lit / λ / q / r / sci / sec / tech / w / zzz ] archive provided by lainchan.jp

lainchan archive - /λ/ - 12467

File: 1449188479455-0.png (251.53 KB, 300x197, underwater_room.jpg)

File: 1449188479456-1.png (415.31 KB, 300x169, technoshaman.jpg)


Most of us have heard how amazing the Lisp machines of the 80s were: whole operating systems dedicated entirely to human-computer interaction and abstraction.

What I've noticed myself and other Lisp hackers end up doing is living out of a CL+Emacs ecosystem: bringing environment variables into the runtime from async shell calls, SBCL native handles, etc., into S-expressions or M-x commands, which turns these hooks and system objects into objects in our Lisp environments.
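A minimal sketch of the kind of glue I mean, in Emacs Lisp (the `my/` names are my own invention, nothing standard):

```lisp
(require 'subr-x)  ; for string-trim on older Emacsen

;; Pull a shell command's output into the Lisp world as a plain string.
(defun my/shell-value (cmd)
  "Run CMD synchronously and return its trimmed stdout."
  (string-trim (shell-command-to-string cmd)))

;; Environment variables and system state become ordinary Lisp values.
(setq my/display (getenv "DISPLAY"))
(setq my/uptime  (my/shell-value "uptime -p"))
```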

I do this with my browsers as well, bringing in DOM objects from JavaScript compiled from ParenScript (Common Lisp -> JavaScript)
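For anyone who hasn't seen it, ParenScript compiles s-expressions into a JavaScript string you can then hand to the browser; a trivial sketch (assuming the parenscript system is loaded via Quicklisp):

```lisp
(ql:quickload :parenscript)

;; PS:PS compiles the enclosed forms and returns JavaScript source text.
(ps:ps
  (setf (ps:chain document body style background) "black"))
;; => something like "document.body.style.background = 'black';"
```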

This is all fine and well until you want to run your system on another machine or your computer crashes and you find yourself setting up your desktop from scratch.

I propose we collaborate on abstracting Linux system hooks, various windowing environment utilities, and any command line programs we use into the SBCL runtime, creating a portable Lisp virtual machine with the possibility of bootstrapping an even more powerful version of Emacs from the result.

Macros can then be created at a system level, we can even bind statistical data on command usage and create command topologies to speed up personal automation or even collaborate on distributed machine learning/AI systems.


File: 1449189192267.png (413.43 KB, 200x200, neuron.jpg)


My first question is: knowing this type of system is possible, what parts of the web or native runtime would you like to see pulled into Lisp execution?

Is there a good command line windowing tool for manipulating windows in X?

How can I/we hook X keyboard/mouse input through Emacs/SBCL? How do you bind X window commands or listeners from an external program?


>Is there a good command line windowing tool for manipulating windows in X?
not sure, but you have a whole window manager written in Common Lisp (StumpWM), so I'd say you could reuse at least part of its code



this goes against my point about virtualization; that library may not have all the functionality of other WMs, or may not even see support in a few years, and new adopters of a Lisp virtual machine may wish to reuse utilities they are already familiar with or that their current desktop setup depends on. Abstracting out commonly used command line utilities would also encourage more people to use a Lisp virtual machine than forcing them to learn a completely new system


Wouldn't CLX work for that? http://www.cliki.net/CLX

Also, I am interested in the project, but sadly I don't know where to start. Maybe an init? After all, it IS init that starts userspace.
I don't know what your ideas are; off the top of my head, a sort of kernel interface that takes sexprs and does all syscalls. It could divide userspace into a C portion, where libraries get loaded as well as C-based end user applications, and the less nice (in the Unix sense) SBCL environment, which does the rest with resources taken directly from the kernel and handles the low-level interactions with the C portion
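SBCL already ships a thin POSIX layer that could be the seed of that sexpr-syscall interface; a few calls as a sketch:

```lisp
(require :sb-posix)

;; Syscalls and libc wrappers exposed as ordinary Lisp functions.
(sb-posix:getpid)                        ; current process id
(sb-posix:getenv "HOME")                 ; environment lookup
(sb-posix:mkdir "/tmp/lispland" #o755)   ; mkdir(2) with an octal mode
```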


Of course it'd be great if (for simplicity) we could do without C entirely in the first stages, so we just get a kernel+SBCL. We could then go by layers and have, for starters, a simple syscall-lisp. On top of that we can make just about anything. Let me look at the dependencies for SBCL and whether it can be linked with static libraries...



I think a great way to start this is with an SBCL+Emacs installer: an init .el+.lisp bundle (the first workflow: open Emacs, load an .el file downloaded from the internet to install the required packages/files)
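A first cut at that installer .el could be as dumb as this (the package list is a placeholder for whatever the bundle actually needs):

```lisp
;; mycelium-install.el -- load this in a fresh Emacs to pull dependencies.
(require 'package)
(add-to-list 'package-archives
             '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
(package-refresh-contents)

(dolist (pkg '(slime paredit))          ; placeholder package list
  (unless (package-installed-p pkg)
    (package-install pkg)))
```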

>C portion

this would also be excellent for expanding eventual cross-platform Emacs kernel functionality (as a fork or as a package-like thing?) and should be organized accordingly

Repo here folks



all this can be done, but there needs to be a concern for a formalized system of options for choosing configurations during installation; some may not want native call permissions and may wish to use their own management programs, so everything needs to be abstracted out appropriately by some system, given a base SBCL+Emacs installation (I don't take Linux as a given, because we may wish to provide utilities even for Windows client platforms; I like to create portable installations on USB to use on random computers sometimes)

C libs need to not be leaky: the user should not have to hunt the internet for drivers, DLLs, or libs; the system should assume nothing until the user chooses something, and things should be installed to completion

utilities as Lisp objects should be tagged by functionality as manually assigned (assignments may be configured); i.e. windowing utilities that you use to manipulate the desktop should be tagged in a category; our tags for this organization should be flexible, subject to change with regard to linguistic relativity, peripheral device use, workflows, etc.

also importantly:

Common Lisp should be the preferred runtime/language for the bulk of the work; Emacs Lisp should /only/ be used for binds to buffers with async I/O, macros, etc.


other things:

S-exp/macro evaluation should be recorded to a relational database

I'm thinking PostgreSQL? DB Management will be with CL
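A sketch of what the CL side might look like with Postmodern (the table layout and names are just a guess at this point):

```lisp
(ql:quickload :postmodern)
(postmodern:connect-toplevel "mycelium" "lain" "" "localhost")

(postmodern:execute
 "CREATE TABLE IF NOT EXISTS evals
    (id serial PRIMARY KEY, form text, at timestamptz DEFAULT now())")

;; Record each evaluated s-expression for later analysis.
(defun log-eval (form)
  (postmodern:execute "INSERT INTO evals (form) VALUES ($1)"
                      (prin1-to-string form)))
```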


File: 1449204902640.png (4.86 MB, 200x134, mycelium.gif)

I tried this before with much less forethought and ended up with a hunk of Lisp spaghetti lying around; however, I think this can really go far if we focus on providing a singular interface and robust installation procedures

so the logical first step for runtime functionality is hooking X keyboard input: at any point while mycelium is running you should be able to run commands (M-x lookups, emacs keybinds) without having to be focused on emacs

I'll start by hooking CLX to Emacs; for now, people who are interested in contributing can just submit lists of the command line functions they like to hook, which we can compile and analyze to categorize
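The CLX end of that is roughly this (a sketch; error handling omitted):

```lisp
(ql:quickload :clx)

;; Connect to the X server and list the children of the root window.
(let* ((display (xlib:open-default-display))
       (screen  (xlib:display-default-screen display))
       (root    (xlib:screen-root screen)))
  (unwind-protect
       (xlib:query-tree root)   ; => the top-level windows
    (xlib:close-display display)))
```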


Here's the issue with all systems like this: UNIX is so poorly designed that you can't abstract over it like this.

Anyways, there's already a way to just use Emacs as the init and only user interface:

Of course, Emacs Lisp is actually pretty gross compared to Common Lisp, but you have to understand that this is entirely because Emacs is nice to use on UNIX: Emacs has had to adapt to UNIX.

All of those niceties that make Emacs nice to use are what you get when you take a Lisp and specially craft it for a UNIX, because UNIX is broken beyond any hope for repair.


So you want dmd, guix, scsh and emacs-guile but rewritten in Common Lisp?


The more I think about this, the more it seems to me we're trying to make a lisp-systemd. No, I'm not trying to insult anyone; I think that's the closest to a full Lisp machine we can emulate on a Linux kernel. A daemon which runs as PID 1 as root, sets up services, and spawns a client which is essentially a lispy shell. The clients are full Lisp runtimes, and the server manages users and resource distribution in userspace, acting as a middleman between kernel and user
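To make the lisp-systemd idea concrete, it could be sketched like this (pure hand-waving: the service names are placeholders, and real supervision and resource management are elided):

```lisp
;; PID 1 as an SBCL image: bring up services, then keep a lispy shell alive.
(defparameter *services* '("udevd" "dbus-daemon"))  ; placeholder list

(defun start-services ()
  (dolist (s *services*)
    (uiop:launch-program s)))   ; fire-and-forget; a real init would supervise

(defun lispy-shell ()
  (loop (format t "~&mycelium> ")
        (finish-output)
        (print (eval (read)))))

(defun init ()
  (start-services)
  ;; PID 1 must never exit, so swallow errors and restart the shell.
  (loop (ignore-errors (lispy-shell))))
```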


This actually sounds like a pretty good system design, especially as the implementation matures.


File: 1449251635447.png (468.83 KB, 162x200, 1447001894806.png)

I want my BASH to look like this
how do i aquarium?




that's not what this is trying to accomplish

nothing has to be pulled out into userspace until the user wants it; almost every lisp hacker that stays in Linux land does this, but without collaboration with others to tame improper code abstraction; anyone wishing to build a robust cross-platform networked virtual machine on Linux using Lisp would have to do this anyway

this is entirely about providing base emacs macro-ability to whatever you want to pull from userspace into lisp-land


I set my emacs background alpha to a lower value and throw up a video/WebGL instance from a browser tab which I can manipulate from kommissar



that isn't what this is trying to accomplish; the idea here is an Emacs+SBCL runtime with facilities to virtualize what most people like to manipulate or configure over the command line, on whatever operating system stack they use


Why SBCL instead of ECL? Mashing together C and Common Lisp is trivial in ECL and I gather that's what you want to do ITT.


For manipulating X windows I know of wmutils, but I never used it myself.



I can't emphasize this enough:

writing more native code for its own sake is not desirable here; the primary point of this is to abstract out the command line utils and scripts people already use

>SBCL instead of ECL

because SBCL is overwhelmingly the standard, and using anything less well known on top of an already semi-obscure language is unwise



for example:

I would encourage posters to upload pastebins or submit git requests with Lisp snippets like this (the language is arbitrary for now; it would be nice to start collecting the properly abstracted utilities we use)


(defun screen-capture ()
  (interactive)
  (async-shell-command
   (concat "byzanz-record "
           "--duration=5 --x=0 --y=60 "
           "--width=750 --height=500 "
           "--delay=2 "
           "~/mycelium.gif")))



other cool stuff you can do with a WebGL instance includes creating animations or gradients across highlights, mouse movements, etc., coordinated with the browser behind your emacs instance


>Who's tired of forgetting command line parameters for badly named system utilities?
With an example like screen-capture, you're really just exchanging UNIX commands for Lisp functions that either have fewer options or options that are just as confusing.

The only advantage I can see to making a Lisp function wrapper around every UNIX command is that Lisp functions have their lambda lists remembered by the system, which would make it trivial to have better documentation built in.

UNIX, of course, makes such automatic documentation impossible for every program, as UNIX does with several other useful features.



>the only advantage

many people don't like playing gymnastics with their windowing system and console stuff; the ability to create custom languages from abstracted functions, in Lisp, is much more than just that


>the ability to create custom languages from abstracted functions, in lisp, is much more than just that
I'm not saying it's not, but that's what it feels like when you suggest trying to put Lisp on a UNIX, like trying to put makeup on a pig.


>Who's tired of forgetting command line parameters for badly named system utilities?
Though this might be a little off-topic, is there anything like eldoc-mode for the shells? Preferably bash or zsh.



>like trying to put makeup on a pig

linux is insanely portable and socially more powerful than any toy lisp machine you can crank out. If we wanted to bootstrap a purely C-Lisp mount backend, sure, but I care less about what you read on loper-os than about my productivity, and dicking around with drivers is not my (or everyone's) personal priority in their work


Not that I'm aware of, but you could probably do something to have completion options visible without having to hit tab. Programs with good completion will show documentation for flags when hitting tab in zsh.

Why would you do this instead of just making an alias or function in bash/zsh...? (also... byzanz-record....)


Nah, to do this properly you have to do something like Singularity and have *everything* handled in a Lisp VM. https://en.wikipedia.org/wiki/Singularity_(operating_system)


yeah, that's what I was saying, but the OP seems to want something like library bindings or fuarrrk knows what.
He doesn't even deliver a proper specification, so nobody really understands what he's up to. I'm not even sure he knows what he wants himself.



I doubt many have tried and even fewer with collaborative effort

>He don't even deliver a proper specification so nobody really understads what he's up to.

nor have I promised one beyond my core proposal, which should be quite clear

you can complain or contribute ideas; I want to hear what everyone who is interested in this kind of concept has to say about it, and I am in no rush during this time of the year to cater to impatience on this


a revised proposal:

run Linux on a FOSS virtual machine with kernel/machine debugging hooks; set breakpoints in the virtual machine code to hook functions with Lisp to control from the host machine, and do similar things with state-tracking and statistical analysis on stuff like peripheral input through the code of the VM itself, making things like graphical writes or keyboard state trivial and bootstrapping an OS/hypervisor that way

any thoughts on this?


I like the idea; that way you can experiment and mess around with a VM. I might try it for my project


File: 1450766209007.png (32.75 KB, 200x150, belt_pants.jpg)

VirtualBox has a COM API


I'm reading over it now; if it can't provide the level of access needed, the next step will be working on a better REPL in VirtualBox, because recompiling this soykaf that often sounds like a nightmare


File: 1451256209748.png (27.43 KB, 200x170, qq.jpg)

VirtualBox's compilation requirements and code organization horrify... I don't know what else was to be expected from Oracle

we're going with QEMU

first little hack will be writing straight to video memory from emacs...


File: 1452206924499-0.png (502.97 KB, 200x75, qemu_spelunking.png)

File: 1452206924499-1.png (119.42 KB, 200x200, qemu-code-overview.pdf)

been setting up an emacs environment to explore the hypervisor's guts

QEMU has a JSON-style API called QMP (exposed through the monitor) and a REPL shell interface (HMP) with a common set of commands; they're only good for simple diagnostics or small things like sending keystrokes or adding/removing devices

it's best to open the HMP in emacs from a shell with the "-monitor stdio" flag added to your qemu start string, and "-s -S" to enable more thorough access via remote gdb debugging of qemu in another emacs buffer

"qemu-system-i386 -monitor stdio -s -S"

opens a blank qemu instance

then in gdb

"target remote :1234" should connect gdb to the paused qemu runtime

this may trigger a warning about non-stop remote access, which you can disable by (customize-option)'ing 'gdb-non-stop-setting off.

currently mapping out where and how qemu lays out its memory for simulated devices and ports
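Putting the post above together as a shell recipe (the address is the classic VGA text-mode buffer at 0xb8000; treat the whole thing as a sketch):

```shell
# Start a bare guest, paused, with the monitor on stdio and a gdb stub on :1234.
qemu-system-i386 -monitor stdio -s -S &

# In another terminal (or an Emacs gdb buffer): attach and peek at guest memory.
gdb -ex 'target remote :1234' \
    -ex 'x/16xb 0xb8000'       # the VGA text-mode buffer
```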


File: 1455384076981.png (51.13 KB, 160x200, 04September_1.jpg)

a shell written in common lisp


File: 1460461989034.png (313.51 KB, 200x113, revy Black Lagoon.jpg)

Here's the new link to the Common Lisp shell:


File: 1463447749518.png (21.75 KB, 160x200, 1457746163542.jpg)

Just a question for anyone:
Why LISP over another language like C? As far as I know, LISP isn't as low-level as C, so it isn't an efficient language for drivers or anything involving hardware.


Well, there's your answer right there.
If you're going to write device drivers, do it in C.
If you're going to write something that isn't directly hardware-related, you can then consider using CL.


>Why LISP over any other language like C? As far as I know LISP isn't as low-level as C, thus it isn't an efficient language to use for drivers or any thing involving hardware.
There have been talks of putting a language like Lua in the NetBSD kernel for device drivers.

Lisp is what is called a wide language: it's suitable for both high-level and low-level programs. It would take some massaging, but there's no reason why specific memory locations and hardware couldn't be controlled by a Lisp program. Even languages like C require some assembler code to truly reach the lowest levels.
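As a concrete, SBCL-specific illustration: reading a raw memory address from Lisp is a one-liner with the internal sb-sys package, which is the kind of access a driver would build on (the address below is illustrative, and in a normal Unix process this will fault unless the page is mapped):

```lisp
;; sb-sys ships with SBCL; "system area pointers" wrap raw addresses.
(let ((sap (sb-sys:int-sap #xb8000)))   ; illustrative address
  (sb-sys:sap-ref-8 sap 0))             ; read one byte through the pointer
```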


If you wound up with spaghetti, you need to document your code more.
When coding you make decisions based on the problems you face, but if you fail to document them, you will forget why you chose to do what you did.


Is there an alternative to UNIX?
Why is it you think UNIX is such shit?


Your core proposal is an idea. You need a proper layout of how you will accomplish what you want to accomplish, if you wish for your project to live more than 2 months.


>Is there an alternative to UNIX?
Every other operating system is an alternative.

>Why is it you think UNIX is such shit?

Most people using UNIX treat it like the pinnacle of efficiency, when it is very far from it.
The shell is awful and very inefficient. Imagine if calling functions in, say, C were like calling a UNIX program. Imagine if they constantly had to serialize and deserialize data to send it back and forth.

UNIX makes sense if you're using a weak computer from the 1970s. Now we just emulate weak hardware from the 1970s to run UNIX.

It's insane.


What do you suggest we use instead? Do you seriously expect us to deal with Windows so we don't have to use a Unix or Unix-like OS like OS X or Linux?


File: 1463701178146.png (324.27 KB, 200x125, calm.jpg)

I've wanted to get into LISP for a long time, as it sounds very promising, but I haven't found an implementation which works for me.

How do you set up your emacs for working with lisp? How did you get started?


File: 1463751856949.png (1.29 MB, 148x200, 8c2048b6a080fa3f7ce1ee77db32618e34210700.jpg)


Careful. I agree in general but you might be infected with a strain of smug lisp weenie right now. It'll be more useful and less of a hassle to work with Unix/Linux unless there are lainons who have the expertise/organization to create something like Mezzano. Though in that case, it might be better to just help work on that instead.


>deal with Windows

I'm stopping by to say that I develop with Common Lisp on Windows and am writing/deploying an application to users on Windows. Unless you're willing to put up with all the bullshit that comes your way like me, don't even joke about it. It's mostly alright because Emacs is cross-platform (I'm even writing this post in Emacs). Though during my time, I've come across annoying bugs that can only be encountered on Windows and have had to fix a few myself. I'm probably the rare user on this board who continues to develop on Windows like this and doesn't give a single fuck about it. Rest easy at night knowing that even if by chance you happen to be insane, you're not insane enough to use Windows. Though if you're interested, there's always more room for people to join the nut house, because it's always nice to have some more company.


If you'd like to get started, then just install Steel Bank Common Lisp (SBCL). To quickly set up Emacs, grab SLIME and Paredit because they are "must have" additions for developing with Common Lisp. Their respective homepages will have details on how to set them up. If you already know programming, this will be a great resource to jump-start you into it: http://www.gigamonkeys.com/book/ Otherwise you might like "Land of Lisp" or "Common Lisp: A Gentle Introduction to Symbolic Computation". Make sure to go over the built-in Emacs tutorial, since knowing the keybindings will help a lot, but don't feel like you need to know all of them at once. There's always the Lisp General if you need more help.
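Once SBCL, SLIME, and Paredit are installed, the Emacs side only needs a few lines, something like:

```lisp
;; ~/.emacs fragment for Common Lisp hacking.
(setq inferior-lisp-program "sbcl")     ; tell SLIME which Lisp to run
(require 'slime)
(slime-setup '(slime-fancy))            ; the usual contrib bundle
(add-hook 'lisp-mode-hook #'enable-paredit-mode)
```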

Anyways, pic unrelated. I just like it because there's a silly animu girl with a dog behind her and it makes me smile. They look so silly hehehe. がー!


File: 1464349950041.png (59.03 KB, 200x162, 1413321200768.gif)



search for "Pros and Cons of Suns" without the quotes


for the lazy:

Date: Fri, 27 Feb 87 21:39:24 EST
From: John Rose
To: sun-users, systems

Pros and Cons of Suns

Well, I’ve got a spare minute here, because my Sun’s editor window
evaporated in front of my eyes, taking with it a day’s worth of Emacs

So, the question naturally arises, what’s good and bad about Suns?
This is the fifth day I’ve used a Sun. Coincidentally, it’s also the fifth
time my Emacs has given up the ghost. So I think I’m getting a feel
for what’s good about Suns.

One neat thing about Suns is that they really boot fast. You ought to
see one boot, if you haven’t already. It’s inspiring to those of us
whose LispMs take all morning to boot.

Another nice thing about Suns is their simplicity. You know how a
LispM is always jumping into that awful, hairy debugger with the
confusing backtrace display, and expecting you to tell it how to pro-
ceed? Well, Suns ALWAYS know how to proceed. They dump a
core file and kill the offending process. What could be easier? If
there’s a window involved, it closes right up. (Did I feel a draft?)
This simplicity greatly decreases debugging time because you imme-
diately give up all hope of finding the problem, and just restart from
the beginning whatever complex task you were up to. In fact, at this
point, you can just boot. Go ahead, it’s fast!

One reason Suns boot fast is that they boot less. When a LispM loads
code into its memory, it loads a lot of debugging information too. For
example, each function records the names of its arguments and local
variables, the names of all macros expanded to produce its code, doc-
umentation strings, and sometimes an interpreted definition, just for
good measure.

Oh, each function also remembers which file it was defined in. You
have no idea how useful this is: there’s an editor command called
“meta-point” that immediately transfers you to the source of any
function, without breaking your stride. ANY function, not just one of
a special predetermined set. Likewise, there’s a key that causes the
calling sequence of a function to be displayed instantly.

Logged into a Sun for the last few days, my Meta-Point reflex has
continued unabated, but it is completely frustrated. The program that
I am working on has about 80 files. If I want to edit the code of a
function Foo, I have to switch to a shell window and grep for named
Foo in various files. Then I have to type in the name of the appropri-
ate file. Then I have to correct my spelling error. Finally I have to
search inside the file. What used to take five seconds now takes a
minute or two. (But what’s an order of magnitude between friends?)
By this time, I really want to see the Sun at its best, so I’m tempted to
boot it a couple of times.

There’s a wonderful Unix command called “strip,” with which you
force programs to remove all their debugging information. Unix pro-
grams (such as the Sun window system) are stripped as a matter of
course, because all the debugging information takes up disk space
and slows down the booting process. This means you can’t use the
debugger on them. But that’s no loss; have you seen the Unix debug-
ger? Really.

Did you know that all the standard Sun window applications
(“tools”) are really one massive 3/4 megabyte binary? This allows
the tools to share code (there’s a lot of code in there). Lisp Machines
share code this way, too. Isn’t it nice that our workstations protect
our memory investments by sharing code.



None of the standard Sun window applications (“tools”) support
Emacs. Unix applications cannot be patched either; you must have
the source so you can patch THAT, and then regenerate the applica-
tion from the source.

But I sure wanted my Sun’s mouse to talk to Emacs. So I got a cou-
ple hundred lines of code (from GNU source) to compile, and link
with the very same code that is shared by all the standard Sun win-
dow applications (“tools”). Presto! Emacs gets mice! Just like the
LispM; I remember similar hacks to the LispM terminal program to
make it work with Emacs. It took about 20 lines of Lisp code. (It also
took less work than those aforementioned couple hundred lines of
code, but what’s an order of magnitude between friends?)

Ok, so I run my Emacs-with-mice program, happily mousing away.
Pretty soon Emacs starts to say things like “Memory exhausted” and
“Segmentation violation, core dumped.” The little Unix console is
consoling itself with messages like “clntudp_create: out of memory.”
Eventually my Emacs window decides it’s time to close up for the

What has happened? Two things, apparently. One is that when I cre-
ated my custom patch to the window system, to send mouse clicks to
Emacs, I created another massive 3/4 megabyte binary, which
doesn’t share space with the standard Sun window applications

This means that instead of one huge mass of shared object code run-
ning the window system, and taking up space on my paging disk, I
had two such huge masses, identical except for a few pages of code.
So I paid a megabyte of swap space for the privilege of using a
mouse with my editor. (Emacs itself is a third large mass.)



The Sun kernel was just plain running out of room. Every trivial hack
you make to the window system replicates the entire window system.
But that’s not all: Apparently there are other behemoths of the swap
volume. There are some network things with truly stupendous-sized
data segments. Moreover, they grow over time, eventually taking
over the entire swap volume, I suppose. So you can’t leave a Sun up
for very long. That’s why I’m glad Suns are easy to boot!

But why should a network server grow over time? You’ve got to
realize that the Sun software dynamically allocates very complex
data structures. You are supposed to call “free” on every structure
you have allocated, but it’s understandable that a little garbage
escapes now and then because of programmer oversight. Or pro-
grammer apathy. So eventually the swap volume fills up! This leads
me to daydream about a workstation architecture optimized for the
creation and manipulation of large, complex, interconnected data
structures, and some magic means of freeing storage without pro-
grammer intervention. Such a workstation could stay up for days,
reclaiming its own garbage, without need for costly booting operations.

But, of course, Suns are very good at booting! So good, they some-
times spontaneously boot, just to let you know they’re in peak form!

Well, the console just complained about the lack of memory again.
Gosh, there isn’t time to talk about the other LispM features I’ve
been free of for the last week. Such as incremental recompilation and
loading. Or incremental testing of programs, from a Lisp Listener. Or
a window system you can actually teach new things (I miss my
mouse-sensitive Lisp forms). Or safe tagged architecture that rigidly
distinguishes between pointers and integers. Or the Control-Meta-
Suspend key. Or manuals.

Time to boot!



Basically the very fundamental framework of UNIX is shit and should be avoided at all costs.


also thought this was interesting:

History of the Plague

The roots of the Unix plague go back to the 1960s, when American
Telephone and Telegraph, General Electric, and the Massachusetts Institute
of Technology embarked on a project to develop a new kind of computer
system called an “information utility.” Heavily funded by the Department
of Defense’s Advanced Research Projects Agency (then known as ARPA),
the idea was to develop a single computer system that would be as reliable
as an electrical power plant: providing nonstop computational resources to
hundreds or thousands of people. The information utility would be
equipped with redundant central processor units, memory banks, and input/
output processors, so that one could be serviced while others remained
running. The system was designed to have the highest level of computer
security, so that the actions of one user could not affect another. Its goal
was even there in its name: Multics, short for MULTiplexed Information
and Computer System.

Multics was designed to store and retrieve large data sets, to be used by
many different people at once, and to help them communicate. It likewise
protected its users from external attack as well. It was built like a tank.
Using Multics felt like driving one.

The Multics project eventually achieved all of its goals. But in 1969, the
project was behind schedule and AT&T got cold feet: it pulled the plug on
its participation, leaving three of its researchers—Ken Thompson, Dennis
Ritchie, and Joseph Ossanna—with some unexpected time on their hands.
After the programmers tried unsuccessfully to get management to purchase
a DEC System 10 (a powerful timesharing computer with a sophisticated,
interactive operating system), Thompson and his friends retired to writing
(and playing) a game called Space Travel on a PDP-7 computer that was
sitting unused in a corner of their laboratory.

At first, Thompson used Bell Labs’ GE645 to cross-compile the Space
Travel program for the PDP-7. But soon—rationalizing that it would be
faster to write an operating system for the PDP-7 than developing Space
War on the comfortable environment of the GE645—Thompson had written an assembler,
file system, and minimal kernel for the PDP-7. All to play Space Travel.
Thus Unix was brewed.

Like scientists working on germ warfare weapons (another ARPA-funded
project from the same time period), the early Unix researchers didn’t real-
ize the full implications of their actions. But unlike the germ warfare exper-
imenters, Thompson and Ritchie had no protection. Indeed, rather than
practice containment, they saw their role as evangelizers. Thompson
and company innocently wrote a few pages they called documentation, and
then they actually started sending it out.

At first, the Unix infection was restricted to a few select groups inside Bell
Labs. As it happened, the Lab’s patent office needed a system for text pro-
cessing. They bought a PDP-11/20 (by then Unix had mutated and spread
to a second host) and became the first willing victims of the strain. By
1973, Unix had spread to 25 different systems within the research lab, and
AT&T was forced to create the Unix Systems Group for internal support.
Researchers at Columbia University learned of Unix and contacted Ritchie
for a copy. Before anybody realized what was happening, Unix had


Literature avers that Unix succeeded because of its technical superiority.
This is not true. Unix was evolutionarily superior to its competitors, but not
technically superior. Unix became a commercial success because it
was a virus. Its sole evolutionary advantage was its small size, simple
design, and resulting portability. Later it became popular and commercially
successful because it piggy-backed on three very successful hosts: the
PDP-11, the VAX, and Sun workstations. (The Sun was in fact designed to be a virus vector.)


As one DEC employee put it:

From: CLOSET::E::PETER 29-SEP-1989 09:43:26.63
To: closet::t_parmenter
Subj: Unix

In a previous job selling Lisp Machines, I was often asked about
Unix. If the audience was not mixed gender, I would sometimes
compare Unix to herpes—lots of people have it, nobody wants it,
they got screwed when they got it, and if they could, they would get
rid of it. There would be smiles, heads would nod, and that would
usually end the discussion about Unix.


Of the at least 20 commercial workstation manufacturers that sprouted or
already existed at the time (late 1970s to early 1980s), only a handful—
Digital, Apollo, Symbolics, HP—resisted Unix. By 1993, Symbolics was
in Chapter 11 and Apollo had been purchased (by HP). The remaining
companies are now firmly committed to Unix.

Accumulation of Random Genetic Material

Chromosomes accumulate random genetic material; this material gets hap-
pily and haphazardly copied and passed down the generations. Once the
human genome is fully mapped, we may discover that only a few percent
of it actually describes functioning humans; the rest describes orangutans,
new mutants, televangelists, and used computer sellers.

The same is true of Unix. Despite its small beginnings, Unix accumulated junk genomes at a tremendous pace. For example, it’s hard to find a version of Unix that doesn’t contain drivers for a Linotronic or Imagen typesetter, even though few Unix users even know what these machines look like. As Olin Shivers observes, the original evolutionary pressures on Unix have been relaxed, and the strain has gone wild.


Wed, 10 Apr 91 08:31:33 EDT
From: Olin Shivers <shivers@bronto.soar.cs.cmu.edu>
Subject: Unix evolution

I was giving some thought to the general evolution (I use the term
loosely, here) of Unix since its inception at Bell Labs, and I think it
could be described as follows.

In the early PDP-11 days, Unix programs had the following design parameters:

Rule 1. It didn’t have to be good, or even correct, but:

Rule 2. It had to be small.

Thus the toolkit approach, and so forth.

Of course, over time, computer hardware has become progressively
more powerful: processors speed up, address spaces move from 16 to
32 bits, memory gets cheaper, and so forth.

So Rule 2 has been relaxed.



The additional genetic material continues to mutate as the virus spreads. It
really doesn’t matter how the genes got there; they are dutifully copied
from generation to generation, with second and third cousins resembling
each other about as much as Woody Allen resembles Michael Jordan. This
behavior has been noted in several books. For example, Section 15.3,
“Routing Information Protocol (RIP),” page 183, of an excellent book on
networking called Internetworking with TCP/IP by Douglas Comer,
describes how inferior genes survive and mutate in Unix’s network code
(paragraph 3):


Despite minor improvements over its predecessors, the popularity of RIP as an IGP does not arise from its technical merits. Instead, it has resulted because Berkeley distributed routed software along with the popular 4.X BSD UNIX systems. Thus, many Internet sites adopted and installed routed and started using RIP without even considering its technical merits or limitations.


The next paragraph goes on to say:


Perhaps the most startling fact about RIP is that it was built and widely distributed with no formal standard. Most implementations have been derived from the Berkeley code, with interoperability limited by the programmer’s understanding of undocumented details and subtleties. As new versions appear, more problems arise.


Like a classics radio station whose play list spans decades, Unix simultaneously exhibits its mixed and dated heritage. There’s Clash-era graphics interfaces; Beatles-era two-letter command names; systems programs (for example, ps) whose terse and obscure output was designed for slow teletypes; Bing Crosby-era command editing (# and @ are still the default line editing commands); and Scott Joplin-era core dumps.

Others have noticed that Unix is evolutionarily superior to its competition, rather than technically superior. Richard P. Gabriel, in his essay “The Rise of Worse is Better,” expounds on this theme (see Appendix A). His thesis is that the Unix design philosophy requires that all design decisions err on the side of implementation simplicity, and not on the side of correctness, consistency, or completeness. He calls this the “Worse Is Better” philosophy and shows how it yields programs that are technically inferior to programs designed where correctness and consistency are paramount, but that are evolutionarily superior because they port more easily. Just like a virus. There’s nothing elegant about viruses, but they are very successful. You will probably die from one, in fact.

A comforting thought.



Good read. Quite funny, too.

Polite sage.


>completely learn a new system
Say Windows to Linux?


Hey, thank you for your advice.
I didn't thank you before because I tend to procrastinate.
I like your dog and girl too.


>making your own virtual paradise in a land of soykafty OS APIs and programming languages

I can definitely understand the feel


File: 1476428791462.png (4.99 MB, 200x200, Craig A. Finseth -- The Craft of Text Editing Emacs for the Modern World.pdf)

I suggest we just make a simple editor in CL and then grow it to handle command-line tasks. Here is a book on how to write an Emacs-like editor.
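A minimal seed for that might look like the sketch below. All the names here are my own invention, not anything from the book: a buffer is just a growable vector of lines, and shelling out (via SBCL's sb-ext) is how it starts absorbing command-line tasks.

```lisp
;; Toy starting point: a buffer as an adjustable vector of lines.
;; Hypothetical names throughout; SBCL assumed for RUN-PROGRAM.
(defstruct buffer
  (lines (make-array 0 :adjustable t :fill-pointer t))
  (point 0))                       ; index of the current line

(defun insert-line (buf text)
  "Append TEXT as a new line and move point to it."
  (vector-push-extend text (buffer-lines buf))
  (setf (buffer-point buf) (1- (fill-pointer (buffer-lines buf)))))

(defun show (buf)
  "Print the buffer, marking the current line with >."
  (loop for line across (buffer-lines buf)
        for i from 0
        do (format t "~:[ ~;>~] ~a~%" (= i (buffer-point buf)) line)))

;; Growing it to handle command-line tasks is then one function:
(defun shell-into-buffer (buf command)
  (insert-line buf
               (string-right-trim '(#\Newline)
                 (with-output-to-string (s)
                   (sb-ext:run-program "/bin/sh" (list "-c" command)
                                       :output s)))))
```

From there a command loop and redisplay are the obvious next layers, which is roughly the order the Finseth book develops things in.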


Can I have a list of known implementations of this concept? I'd like to study how they work.


I know this is an old post but
>Now we just emulate weak hardware from the 1970s to run UNIX.
This seems to be one of the big memes on lainchan, yet nobody cares to explain it.
What do you mean by this? I'm curious, because the only 70s hardware I can think of that is being emulated today is vt100+ terminals


>This seems to be one of the big memes on lainchan, yet nobody cares to explain it.
I use Lainchan a great deal, mostly /λ/. But you should be able to tell this with how quickly I've responded.
>What do you mean by this? I'm curious because the only 70's hardware I can think of that is being emulated today is vt100+ terminals
Virtual terminals are precisely what I mean.


>But you should be able to tell this with how quickly I've responded.
>Virtual terminals are precisely what I mean.
Oh, I thought it was something important, nevermind then


I'm glad someone bumped this thread; I'd forgotten about it for a while. Some people were talking about creating a whole new environment instead of using *nix. I've been working on-and-off on an x86 lisp compiler, and I was thinking of trying the whole lispOS thing once the compiler is finished. I've used Genera and tried out Mezzano, and I love the way they work. I'd be very interested in hearing ideas on how such an environment would work, particularly in areas like the lisp listener; there's a lot of room to implement neat stuff there.


I'm saying that this is mostly my opinion, but I post here more than most others seem to, which makes it seem like more people have the opinion.
>Oh, I thought it was something important, nevermind then
If you don't see an issue with emulating old hardware on new hardware to run a supposedly useful and modern operating system, then that's your issue.
I could point out that C has helped cripple processor design, but you should already know this.


>If you don't see an issue with emulating old hardware on new hardware to run a supposedly useful and modern operating system
indeed, it seems to me a rather inane issue, even if it is indeed rather stupid to still emulate vt100s in the year of our lord. However, terminals are for passing around text, mostly. That's just how unix rolls imo
>I could point out that C has helped cripple processor design
now this is more interesting, and I don't really know what you mean there.


>indeed, it seems to me a rather inane issue, even if it is indeed rather stupid to still emulate vt100s in the year of our lord. However, terminals are for passing around text, mostly. That's just how unix rolls imo
I suppose UNIX's lack of support for video screens is also noteworthy. I mean that only X understands this, so anything using it must use X.
>now this is more interesting, and I don't really know what you mean there.
The knowledge that C compilers didn't make use of certain instructions led to such instructions being removed in newer processor types. I believe this is mentioned with the origin of SPARC, in particular.
I could also point out processor types that had modifiable microcode dying out, in part because such is rather useless for a language such as C.


>I suppose UNIX's lack of support for video screens is also noteworthy.
not inherent to unix
>certain instructions
>modifiable microcode
meh, doesn't sound too exciting


>now this is more interesting, and I don't really know what you mean there.
Machines where each node in a linked list could be represented by a single word used to exist.


File: 1477463323466.png (14.04 KB, 200x63, tim_teitelbaum.png)

any lainons want to meet on rizon irc #lainos around 8pm CST every day to discuss this concept as well as general operating system development?

I'll probably be on sporadically the rest of the day as well, if not all day.


I sincerely doubt the speed at which you operate on a daily basis, or aspire to, would suffer from a. implementing that architecture as a VM on x86, or b. hacking around with x86 SIMD and other available opcodes to make something similar.


I'm in the process of writing a lisp environment for the fictional DCPU-16.

Long-term, I hope to take the lessons I learned there and write an implementation of a similar language for my beaglebone.


This is more interesting in that sense. I was expecting the answer would be something about lisp machines, to which I'd say it was the culture that led to their demise. I'd very much like it if we still had lisp machines being produced; it's really a shame.
I also hate the fact that Intel has become the dominant architecture for personal computers, while stuff like the Amiga went largely unnoticed, because most people want their office environment and don't really care about computing as a form of art.
I'm getting way off topic here, as usual


How do Lisp OSes implement I/O? *nixen have the `everything is a file descriptor` model, which is both convenient and broken. I'd like to see more ideas.

What is Lisp systems programming like?


To contribute to the discussion:
I have a few ideas as to how this could be done.
One such idea is, as I mentioned in another thread, leveraging the Linux kernel, but without any GNU or libc whatsoever: just a lisp runtime on top of it that is the entire userspace. It would implement the system while accessing the hardware through system calls. It would take advantage of the existing Linux kernel, but it has the disadvantage that it wouldn't use the full power of lisp at every level.
The second idea, though better, would take much more time, and it's of course doing it all in lisp from the ground up. My idea of how this would be done is by first making a low-level lispy language, essentially a sort of sexpr-C: a lispy DSL to design the layout of memory at the metal level. It would be the 'C' of the system but better designed, without the limitations of C.
I hope to see some input on this; a lisp-throughout system is, ideally, how I think such a system should be done.
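The first idea can be approximated from an ordinary SBCL today. This is only a sketch under assumptions: it goes through the "write" symbol the SBCL runtime already links from libc, so it is not yet libc-free; a real version would emit the syscall instruction itself.

```lisp
;; Sketch of idea one: a Lisp userspace talking to the Linux kernel.
;; Assumes SBCL on Linux. This resolves "write" through the symbols
;; the SBCL runtime already links, so it only approximates a truly
;; libc-free system.
(sb-alien:define-alien-routine ("write" sys-write) sb-alien:long
  (fd sb-alien:int)
  (buf sb-alien:c-string)
  (count sb-alien:unsigned-long))

(let ((msg (format nil "hello from lisp userspace~%")))
  (sys-write 1 msg (length msg)))   ; fd 1 = stdout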


Oh, to add to the second idea, the C ABI could be entirely discarded this way, and an alternative layout could be implemented, that's why the low level language is not itself a lisp compiler but a description of the semantics of the compiler which would be implemented on top of it.


Maybe you can get an idea from the source code of a Common Lisp OS.



Common Lisp uses streams, which are effectively data sinks and sources.
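To illustrate, using only standard Common Lisp (no implementation extensions): strings, buffers, and composed streams all answer the same read/write protocol, which is what makes them generic sinks and sources.

```lisp
;; Streams as sources and sinks, standard Common Lisp throughout.

;; A string acting as a source:
(with-input-from-string (in "42 foo")
  (read in))                              ; => 42

;; A string buffer acting as a sink:
(with-output-to-string (out)
  (format out "~R" 7))                    ; => "seven"

;; Streams compose: a broadcast stream fans one write out to many sinks.
(let ((a (make-string-output-stream))
      (b (make-string-output-stream)))
  (write-string "hi" (make-broadcast-stream a b))
  (list (get-output-stream-string a)
        (get-output-stream-string b)))    ; => ("hi" "hi")
```

Two-way, echo, synonym, and concatenated streams round out the standard set, so a lisp OS could plausibly route device I/O through the same protocol.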


File: 1477505685960.png (423.2 KB, 200x165, human_computers.jpg)


the results of my research on this stuff relate to the quote in my previous pic from CS researcher Tim Teitelbaum, who outlined all the features of a modern Integrated Development Environment in a 1980 paper:

"Programs are not text, they are hierarchical compositions of computational structures and should be edited, executed, and debugged in an environment that consistently acknowledges and reinforces this viewpoint."

Curtis Guy Yarvin said computing is stuck in the 1970s, I say it is stuck in the 1870s: the interaction between scientist and computer still consists of typing equations on a typewriter and handing the paper result off to your secretary for calculation.

Unix/GNU/Linux fails utterly at this: everything in it is text, files containing text, or binaries for passing around text. Even as a text-oriented operating system it fails, because the command line and pipes do not inherently categorize text, reuse text, trash unneeded text, or compose new text.

Unix developers even treat program binaries as text: a seasoned linux hacker will proudly show off the feat of opening a binary in a hex editor and telling you what it does from a wall of hex text. Userspace is now just another kernel over a text-based virtual CPU...

...rather than an all inclusive IDE for composing, storing, and reusing patterns of signals, programs, and macros that discourage repetition both on disk and for the user/developer: RSI is not fun.

Our dream OS interface must be a swan song from the keyboard and allow us to build more fluid methods of interaction from its bones. But how?

...cont in next post:



>>19689 here. I'm implementing the entire thing in a JIT-ing lisp, with the compiler written in DCPUB (which is a DCPU-16 analogue of C). There's no kernel under it. And it is a DSL, with special instructions for hardware I/O, but the idea is that you can shadow the special I/O functions for most programs and have a general-purpose language.

I think this works well for embedded purposes but would not work for more complex machines like x86. Basically I think it's not worth it to write drivers for more than a few things, so you should use some sort of microkernel that already has drivers for most things, and is known to be safe (Genode comes to mind).


Streams are an elegant abstraction in my opinion, but I'm afraid I don't know enough Common Lisp...

I asked about asynchronous I/O in the `freestanding programs` thread. How does it work in CL?

If one wishes to interoperate with existing software, support for the native ABI should be considered regardless of how Lisp itself works.

By `native ABI` I mean system calls and instruction set. This implies Lisp is not the host OS. If it's implemented as a virtual machine, it can gain access to existing libraries and even host other virtual machines like the CLR and JVM. That's a non-trivial amount of code that can be instantly accessible to Lisp programs.

I don't like any of this legacy stuff but this has the potential to make any new implementation of any programming language instantly useful for solving a wide range of problems. It's a stepping stone towards something greater.


I'd love to have an object visualizer and editor. If code is data, then it could also be used to handle code.


>>19704 again, figured I may as well just use a tripcode and make it easier.

Some advice: if you're writing a lisp and you want it to be fast, don't use a heavyweight language like CL. I started basically with R5RS and stripped out parts that made compilation harder. Now it's more like a functional language. Variables can be shadowed, but they can't be edited in place. The stuff that is there can (mostly) be translated directly into machine code, and it's easy to optimize. If you want a CL-like interface for the user, build it on top of the barebones lisp you used at the lower levels; don't build the whole thing out of it.


File: 1477515936186.png (36.51 KB, 200x66, lisp.png)

One of us isn't getting the point of this whole project.

From what I understand, the point of this project is to create a nice environment for Lisp development (presumably, mainly CL).
To use a whole new Lisp to implement the environment would force a re-implementation of CL, which, apart from being unnecessary, would remove all the usefulness of the environment, since this new implementation of CL would not be compatible with all other implementations, and in particular with the most popular one, SBCL.

Basically, the goal is to have a good environment for SBCL to run on top of.


Eh, personally I'm not a big fan of CL.

Anyway, I thought this thread was just about building a full lisp environment in general. I'm considering multiple ways of getting there.

Not to mention, building a lisp on top of another lisp is really easy, especially once you get macros working.


File: 1477546719332.png (155.32 KB, 186x200, lisp-programmers.jpg)


irc log of discussion here


my personal vision is as follows

* composition, storage, and analysis of repeated patterns in S-expressions, Signals, and machine language
* a global symbol table available for retrieval at any time, with synonyms/translations available for each symbol as well; English words <-> compact Kanji characters <-> Russian words etc
* interaction between user and machine should be the fewest keystrokes used for the most productivity
* interaction (ie, messages, posts to lainchan, as Signals) between common users of the OS on the web should be codified and indexed when possible to reduce keystrokes and encourage a signal oriented paradigm rather than a text one: imagine submitting a bug report in 2 keystrokes: index the location of the error in an AST, then create a signal to a lainchan post in this thread with the trace of the AST at that index

irc discussion was great; we hashed out a lot of design ideas and implementations, and we've arrived at the decision to start with a CL OpenGL environment over Linux

same time tomorrow, 8pm CST, though I and possibly other lainons will be on throughout the rest of the day

irc.rizon.net #lainos


you're aware that those are actually Chinese characters, right?
But I like the idea. However, lisp is already considered an esoteric language of sorts by many; if you add Chinese... good grief.
Plus, you need to learn how to input such characters from the keyboard as well.
If you're talking about readability through brevity, I think 'call-with-current-continuation' is more readable than 道


using the magic of unicode, it's really not a big deal if your symbols are in Chinese.




it would be through translation tables for symbols, so you could switch between them, and with enough practice something like 道 takes up less space on a screen than "call-with-current-continuation"

>Plus, you need to learn how to input such character from the keyboard as well.

input is going to be done on a symbol/s-exp basis rather than character by character anyway; there will be an option for the lookup-table indices to be visible as a tooltip
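A translation table like that could be sketched as below. This is a hypothetical design, assuming the aliases live in a hash table and get rewritten by a code walk before evaluation (the 道 alias is just the example from upthread):

```lisp
;; Hypothetical symbol-synonym table, applied before evaluation.
(defparameter *synonyms* (make-hash-table :test #'eq))

(defun define-synonym (alias canonical)
  (setf (gethash alias *synonyms*) canonical))

(defun translate (form)
  "Recursively replace alias symbols with their canonical names."
  (cond ((symbolp form) (or (gethash form *synonyms*) form))
        ((consp form) (mapcar #'translate form))
        (t form)))

;; Example alias: one kanji standing in for a long operator name.
(define-synonym '|道| 'call-with-current-continuation)
(translate '(|道| (lambda (k) (k 42))))
;; => (CALL-WITH-CURRENT-CONTINUATION (LAMBDA (K) (K 42)))
```

Going the other direction (canonical -> preferred display language) is the same table inverted, which is what would let two users read the same code in different symbol sets.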


another Lainon and I have agreed on a core set of features and a vision and are currently working on the environment; we're starting with an SBCL+OpenGL instance atop Linux:


there's a roadmap in there and if you're interested in development hop in the irc channel in my subject whenever one of us is around

right now we are writing FFI binds for font rendering from https://github.com/rougier/freetype-gl
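For anyone curious what such a bind looks like, here is a hedged CFFI sketch. texture_font_new_from_file is an entry point declared in freetype-gl's texture-font.h, but the shared-library name and the exact signature are assumptions to verify against the checkout you build:

```lisp
;; Hedged CFFI sketch for one freetype-gl entry point.
;; Library name and signature are assumptions; check texture-font.h.
(cffi:define-foreign-library freetype-gl
  (:unix "libfreetype-gl.so")
  (t (:default "libfreetype-gl")))
(cffi:use-foreign-library freetype-gl)

;; texture_font_t* texture_font_new_from_file(texture_atlas_t*,
;;                                            float, const char*)
(cffi:defcfun ("texture_font_new_from_file" texture-font-new-from-file)
    :pointer
  (atlas    :pointer)
  (pt-size  :float)
  (filename :string))
```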


I'll be following this.
I prefer Scheme but I use StumpWM and would definitely play with this if it comes to life.



found this today


>Scsh is a POSIX API layered on top of the Scheme programming language in a manner to make the most of Scheme's capability for scripting.

seems very barebones, but it's become a popular binding on unix in the Boston-centered Scheme community