that was a typo X_X. stopped to think and typed the same thing twice.
>No soykaf. Why would your VPS ever be completely shut down?
what happens when the ssh daemon crashes? or the kernel panics? or an update swaps a library out from under a running process and everything starts throwing errors because the version it linked against is gone?
hard resets are necessary sometimes, or at least physical console access.
>What do you even mean by this? Do you even know what the kernel is? How is it even relevant?
a kernel, in the context of computing, is the bit of code that runs directly on a machine's hardware and provides the controlled environment the rest of an operating system expects to live in: process scheduling, virtual memory, that sort of thing. on modern hardware it also gets a set of privileged machine instructions all to itself, kept off limits to userspace programs at the hardware level via the setting and clearing of a "kernel mode bit".
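to make that boundary concrete, here's a tiny c sketch of my own (illustrative, not from anywhere in particular): userspace can't poke the hardware itself, it has to ask the kernel via a syscall, which flips the cpu into kernel mode and back again.
[code]
/* user/kernel boundary illustration: a userspace program can't do i/o
 * directly, it has to ask the kernel for it via a syscall. */
#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello from userspace\n";

    /* raw syscall: traps into kernel mode, the kernel does the actual
     * write on our behalf, then drops the cpu back into user mode. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);

    /* trying to run a privileged instruction here instead (say asm("hlt"))
     * would just get the process killed, because the cpu refuses it
     * while the kernel mode bit isn't set. */
    return 0;
}
[/code]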
there are two well-known archetypes for kernel design: microkernels and monolithic kernels. a microkernel keeps a minimal footprint, delegating things like drivers / filesystems / whatever to external components, while a monolithic kernel is built as a single unit, with all that extra functionality implemented inside the kernel itself. though the former is widely heralded (by e.g. tanenbaum or the hurd devs), it has yet to see widespread success in practice (unless you count osx, whose kernel is a mach-derived hybrid and really a different thing entirely).
microkernels give you an added layer of security and stability, since very little code actually runs in kernel mode and a misbehaving subcomponent can be restarted or killed without the kernel itself dying. the trade-off is a huge performance overhead: subcomponents don't share an address space or kernel privileges, so they have to spam the kernel with requests via message passing. that's the main reason for their lack of popularity, despite all the praise from theorists.
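just to show what i mean by "spam the kernel with requests", here's a toy sketch (completely made up, not any real microkernel's api, with a socketpair standing in for kernel ipc) of what a single operation turns into when the "filesystem" lives in its own server process:
[code]
/* toy model of microkernel-style ipc: the "filesystem" is a separate
 * process and every operation is a message round trip instead of a
 * direct function call. names and message layout are invented. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

struct msg { int op; int arg; char data[64]; };

int main(void)
{
    int ch[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, ch);    /* stand-in for kernel ipc */

    if (fork() == 0) {                          /* "fs server" process */
        struct msg m;
        read(ch[1], &m, sizeof m);              /* wait for a request */
        snprintf(m.data, sizeof m.data, "reply to op %d", m.op);
        write(ch[1], &m, sizeof m);             /* send the reply back */
        _exit(0);
    }

    /* "client": what would be a plain function call in a monolithic
     * kernel becomes send + block + receive, with context switches. */
    struct msg req = { .op = 1, .arg = 42 };
    write(ch[0], &req, sizeof req);
    read(ch[0], &req, sizeof req);
    printf("%s\n", req.data);
    return 0;
}
[/code]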
in a monolithic kernel, security and reliability are sacrificed for the sake of speed, via shared memory, all sorts of weird optimisations, and lots and lots of inlined code, meaning your kernel can run at ludicrous speed but will also explode at the slightest quiver from that binary blob you loaded for your oversized graphics card. it also means that, if you're using a pre-compiled kernel, you're probably filling up your ram with lots of functionality that's completely useless on your specific machine.
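compare that with the message-passing round trip above: in the monolithic style the same kind of helper is just a plain function (often marked static inline in a header) that the compiler pastes straight into the caller. rough illustration, not actual kernel code:
[code]
/* monolithic style: helpers live in the same address space and get
 * inlined into their callers. no messages, no mode switches between
 * "components", which is where the speed comes from. */
static inline int buffer_free_space(int capacity, int used)
{
    return capacity - used;   /* expands in place at the call site */
}

int try_append(int capacity, int used, int len)
{
    /* after inlining this is just arithmetic, not an ipc round trip */
    return buffer_free_space(capacity, used) >= len;
}
[/code]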
the linux kernel tries to strike a compromise between these two extremes via kernel modules: extra chunks of the kernel that get tacked onto the main body at runtime, which cuts the memory footprint and boosts stability at least slightly via load-time checks and some namespace isolation.
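for reference, the canonical module skeleton looks like the sketch below. it needs your kernel's headers to build, so treat it as illustrative rather than something you can compile standalone:
[code]
/* minimal loadable kernel module skeleton: linked into the running
 * kernel with insmod, unlinked with rmmod. once loaded it runs in
 * kernel mode like everything else, but only occupies memory while
 * it's actually loaded. */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello: module loaded\n");
    return 0;                 /* nonzero here and the load is rejected */
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello: module unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("toy module skeleton");
[/code]
build it against your running kernel's headers with the usual obj-m makefile, insmod it and it shows up in lsmod, rmmod it and the memory comes back. that's the "tacked on at runtime" bit.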
so anyways, yeh, that's a kernel. and yes, i mean the kernel is stored outside the VM, as in written to a disk somewhere on the host and loaded into the VM by whatever hypervisor, which allows for space deduplication and deeper integration for metrics and other functionality. it's relevant because i was giving it as an example of how closely "inside the VM" and "the control panel" are tied together: certain things just can't be done entirely from inside the VM, which is why access to the panel matters.