I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still Linux
So basically another layer to fail before VMS loads. Wonder why people are not
using the real Alpha or Integrity as cheap as they are
On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:
I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still
Linux
And not even using the native KVM virtualization architecture that
is built into Linux.
I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still Linux
So basically another layer to fail before VMS loads.
Wonder why people
are not using the real Alpha or Integrity as cheap as they are
ESXi has its own proprietary kernel called VMKernel.
You can probably call it Linux-inspired: a similar file system layout,
a compatible API subset (but not a fully compatible API), a similar
driver architecture, and a similar CLI experience (BusyBox provides
the usual CLI interface).
But it is not based on Linux kernel code, it is not fully compatible,
and it does not provide all the functionality (it is specialized for
the hypervisor role).
In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:
I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still
Linux
And not even using the native KVM virtualization architecture that is
built into Linux.
History: VMware ESXi was released in 2001 and KVM was merged into the
Linux kernel in 2007.
On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:
In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:
I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still
Linux
And not even using the native KVM virtualization architecture that is
built into Linux.
History: VMware ESXi was released in 2001 and KVM was merged into the
Linux kernel in 2007.
In other words, VMware has long been obsoleted by better solutions.
On 2024-11-28, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:
In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:
I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still
Linux
And not even using the native KVM virtualization architecture that is
built into Linux.
History: VMware ESXi was released in 2001 and KVM was merged into the
Linux kernel in 2007.
In other words, VMware has long been obsoleted by better solutions.
Please explain how ESXi is obsolete, and how KVM is a better solution.
Both KVM and ESXi use the processor's VT-x (or AMD's equivalent, AMD-V) extensions on x86 to efficiently handle instructions that require
hypervisor intervention. I'm not sure how you'd judge which one is a
better solution in that regard. So the only thing that matters, really,
is the virtualization of everything other than the processor itself.
KVM is largely dependent on qemu to provide the rest of the actual
virtual system.
qemu's a great project and I run a ton of desktop VMs
with qemu+KVM, but it just doesn't have the level of maturity or
edge-case support that ESXi does. Pretty much any x86 operating system, historical or current, _just works_ in ESXi. With qemu+KVM, you're
going to have good success with the "big name" OSes...Windows, Linux,
the major BSDs, etc., but you're going to be fighting with quirks and problems if you're trying, say, old OS/2 releases. That's not relevant
for most people looking for virtualization solutions, and the problems
aren't always insurmountable, but you're claiming that KVM is a "better" solution, whereas in my experience, in reality, ESXi is the better technology.
(As an aside, VMWare's _desktop_ [not server] virtualization product,
VMWare Workstation, looks like it's making moves to use KVM under the
hood, but they have said they will continue using their own proprietary virtual devices and drivers, which is really what sets VMWare apart from qemu. This is a move they've already made on both the Windows and Mac OS versions of VMWare Workstation if I understand correctly [utilizing
Hyper-V and Apple's Virtualization framework]. This makes sense... as I
said, the underlying virtualization of the processor is being handled by
the VT-x capabilities of the processor whether you're using VMWare, VirtualBox, KVM, etc., so when running a desktop product under Linux,
you may as well use KVM but you still need other software to build the
rest of the virtual system and its virtual devices, so that's where
VMWare and qemu will still differentiate themselves. None of this is
relevant for ESXi, though, because as has been pointed out earlier in
the thread, it is not running on Linux at all, so VMKernel is providing
its own implementation of, essentially, what KVM provides in the Linux kernel.)
qemu and KVM have the huge advantage that they are open source and free software, of course, whereas ESXi (and vCenter) are closed source and expensive (barring the old no-cost ESXi license).
But ESXi just works. It's solid, it has a huge infrastructure around it
for vSAN stuff, virtual networking management, vMotion "just works," I
find the management interface nicer than, say, Proxmox (although Proxmox
is an impressive product), etc.
It's sad to see Broadcom is going to do everything they can to drive
away the VMWare customer base. VMWare will lose its market-leader
position, FAR fewer people will learn about it and experiment with it
since Broadcom killed the no-cost ESXi licenses, and popularity of
Proxmox is going to skyrocket, I suspect. Which isn't a bad thing --
when open source solutions get attention and traction, they continue to improve, and as I said earlier, Proxmox is already an impressive product
so I look forward to its future.
But make no mistake: VMWare was -- and I'd say still is -- the gold
standard for virtualization, both on the server (ESXi) and the
workstation (VMWare Workstation). VMWare's downfall at the hands of
Broadcom will 100% be due to Broadcom's business practices, not
technology.
I'm a bit of a free software zealot, yet even I still use ESXi for my
"real" servers. I do look forward to eventually replacing my ESXi boxes
with Proxmox for philosophical reasons, but I'm in no rush.
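The point above about KVM being largely dependent on qemu for the rest of
the virtual system can be made concrete with the raw /dev/kvm interface
that qemu itself sits on top of. The following is only a minimal sketch
(my own illustration, not from any poster; Linux on x86-64 assumed, most
error handling omitted): KVM hands userspace a VM container, memory slots
and vCPUs, and nothing more. The code maps one page of guest "RAM",
points a vCPU at a single HLT instruction and runs it; firmware, disks,
NICs, timers and displays are all left to the userspace VMM, which is
exactly the part qemu (or a vendor's own device model) supplies.

#include <err.h>
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    const uint8_t code[] = { 0xf4 };            /* guest code: a single HLT */

    int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
    if (kvm < 0)
        err(1, "/dev/kvm");

    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0UL);  /* a bare VM: no devices, no firmware */

    /* One page of guest "RAM" at guest physical address 0x1000. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof code);
    struct kvm_userspace_memory_region region = {
        .slot = 0,
        .guest_phys_addr = 0x1000,
        .memory_size = 0x1000,
        .userspace_addr = (uint64_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0UL);

    /* The kernel shares vCPU exit state with userspace via a mmap'ed kvm_run. */
    int mmap_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL);
    struct kvm_run *run = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);

    /* Real mode, CS base 0, start executing at 0x1000. */
    struct kvm_sregs sregs;
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0;
    sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);

    struct kvm_regs regs;
    memset(&regs, 0, sizeof regs);
    regs.rip = 0x1000;
    regs.rflags = 0x2;
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    /* The VMM's main loop: run the vCPU and handle exits. A real VMM would
     * service KVM_EXIT_IO / KVM_EXIT_MMIO here -- that is the device model. */
    for (;;) {
        ioctl(vcpufd, KVM_RUN, NULL);
        if (run->exit_reason == KVM_EXIT_HLT)
            break;
    }
    return 0;
}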
But ESXi just works. It's solid, it has a huge infrastructure around it
for vSAN stuff, virtual networking management, vMotion "just works," I
find the management interface nicer than, say, Proxmox (although Proxmox
is an impressive product), etc.
It's sad to see Broadcom is going to do everything they can to drive
away the VMWare customer base. VMWare will lose its market-leader
position, FAR fewer people will learn about it and experiment with it
since Broadcom killed the no-cost ESXi licenses, and popularity of
Proxmox is going to skyrocket, I suspect. Which isn't a bad thing --
when open source solutions get attention and traction, they continue to improve, and as I said earlier, Proxmox is already an impressive product
so I look forward to its future.
But make no mistake: VMWare was -- and I'd say still is -- the gold
standard for virtualization, both on the server (ESXi) and the
workstation (VMWare Workstation). VMWare's downfall at the hands of
Broadcom will 100% be due to Broadcom's business practices, not
technology.
But that is not how the enterprise IT world looks today. Today there
are 3 possible setups:
1) public cloud
2) on-prem with containers, either on bare metal or on VMs in a very
basic setup (because k8s and other container stuff provide all the
advanced functionality)
3) on-prem with traditional VMs
#1 is not ESXi, as the big cloud vendors do not want to pay and they
want to customize. #2 does not need to be ESXi, as no advanced features
are needed, so any virtualization is OK and ESXi costs money. #3 is all
that is left for ESXi to shine with its advanced features.
On Thu, 28 Nov 2024 08:39:39 -0000 (UTC), Matthew R. Wilson wrote:
Please explain how ESXi is obsolete, and how KVM is a better solution.
KVM is built into the mainline kernel, is the basis of a broad range of virtualization solutions, and has broad support among the Linux community.
The fact that Broadcom has had to raise prices tells you all you need to
know about the costs of maintaining proprietary solutions.
Interesting report <https://arstechnica.com/information-technology/2024/12/company-claims-1000-percent-price-hike-drove-it-from-vmware-to-open-source-rival/>
on a company which switched from VMware to an open-source alternative
as a result of Broadcom’s massive price hikes, and encountered an unexpected benefit: the resources consumed by system management
overhead on the new product were so much less, they could run more VMs
on the same hardware.
Matthew R. Wilson <mwilson@mattwilson.org> wrote:
On 2024-11-28, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:
In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
D'Oliveiro) wrote:
On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:
I keep being told that VMWARE is not an OS in itself.
But it is... based on Ubuntu Kernel.... stripped down but still
Linux
And not even using the native KVM virtualization architecture that is built into Linux.
History: VMware ESXi was released in 2001 and KVM was merged into the
Linux kernel in 2007.
In other words, VMware has long been obsoleted by better solutions.
Please explain how ESXi is obsolete, and how KVM is a better solution.
Both KVM and ESXi use the processor's VT-x (or AMD's equivalent, AMD-V)
extensions on x86 to efficiently handle instructions that require
hypervisor intervention. I'm not sure how you'd judge which one is a
better solution in that regard. So the only thing that matters, really,
is the virtualization of everything other than the processor itself.
Little nitpick: virtualization needs to handle _some_ system instructions.
But with VT-x, and particularly with nested page tables, this should
be easy.
KVM is largely dependent on qemu to provide the rest of the actual
virtual system. qemu's a great project and I run a ton of desktop VMs
with qemu+KVM, but it just doesn't have the level of maturity or
edge-case support that ESXi does. Pretty much any x86 operating system,
historical or current, _just works_ in ESXi. With qemu+KVM, you're
going to have good success with the "big name" OSes...Windows, Linux,
the major BSDs, etc., but you're going to be fighting with quirks and
problems if you're trying, say, old OS/2 releases. That's not relevant
for most people looking for virtualization solutions, and the problems
aren't always insurmountable, but you're claiming that KVM is a "better"
solution, whereas in my experience, in reality, ESXi is the better
technology.
What you describe is now a very atypical use: faithfully implementing
all the quirks of real devices. The more typical case is a guest which
knows that it is running on a hypervisor and uses a virtual interface
with no real counterpart. For this, the quality of the virtual
interfaces matters. I do not know how ESXi compares to KVM, but I know
that "equivalent" but different virtual interfaces in qemu+KVM may have
markedly different performance.
(As an aside, VMWare's _desktop_ [not server] virtualization product,
VMWare Workstation, looks like it's making moves to use KVM under the
hood, but they have said they will continue using their own proprietary
virtual devices and drivers, which is really what sets VMWare apart from
qemu. This is a move they've already made on both the Windows and Mac OS
version of VMWare Workstation if I understand correctly [utilizing
Hyper-V and Apple's Virtualization framework]. This makes sense... as I
said, the underlying virtualization of the processor is being handled by
the VT-x capabilities of the processor whether you're using VMWare,
VirtualBox, KVM, etc., so when running a desktop product under Linux,
you may as well use KVM but you still need other software to build the
rest of the virtual system and its virtual devices, so that's where
VMWare and qemu will still differentiate themselves. None of this is
relevant for ESXi, though, because as has been pointed out earlier in
the thread, it is not running on Linux at all, so VMKernel is providing
its own implementation of, essentially, what KVM provides in the Linux
kernel.)
From what you wrote it seems that ESXi is more similar to Xen than to
KVM+qemu; that is, ESXi and Xen discourage running unvirtualized
programs, while with KVM+qemu some (frequently most) programs run
unvirtualized and only the rest is virtualized. I do not know if this
sets limits on the quality of virtualization, but that could be a valid
reason for ESXi to provide its own kernel.
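To make the "guest which knows that it is running on a hypervisor" point
above concrete: on x86 a guest normally discovers this through CPUID
rather than by probing devices. A small sketch (my own, assuming x86 and
GCC or Clang): CPUID leaf 1 sets ECX bit 31 when a hypervisor is present,
and the conventional leaf 0x40000000 returns a vendor signature such as
"KVMKVMKVM" or "VMwareVMware".

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }
    if (!(ecx & (1u << 31))) {           /* hypervisor-present bit */
        printf("no hypervisor bit set: probably bare metal\n");
        return 0;
    }

    /* Leaf 0x40000000: EBX/ECX/EDX carry the hypervisor signature. */
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    char sig[13];
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    sig[12] = '\0';
    printf("running under a hypervisor, signature \"%s\"\n", sig);
    return 0;
}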
On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
From what you wrote seem that ESXi is more similar to Xen than to
KVM+qemu, that is ESXi and Xen discourage running unvirtualized programs
while in KVM+qemu some (frequently most) programs is running
unvirtualized and only rest is virtualized.
I think that dates back to the old distinction between “type 1” and “type
2“ hypervisors. It’s an obsolete distinction nowadays.
On 12/2/2024 10:09 PM, Waldek Hebisch wrote:
Matthew R. Wilson <mwilson@mattwilson.org> wrote:
KVM is largely dependent on qemu to provide the rest of the actual
virtual system. qemu's a great project and I run a ton of desktop VMs
with qemu+KVM, but it just doesn't have the level of maturity or
edge-case support that ESXi does. Pretty much any x86 operating system,
historical or current, _just works_ in ESXi. With qemu+KVM, you're
going to have good success with the "big name" OSes...Windows, Linux,
the major BSDs, etc., but you're going to be fighting with quirks and
problems if you're trying, say, old OS/2 releases. That's not relevant
for most people looking for virtualization solutions, and the problems
aren't always insurmountable, but you're claiming that KVM is a "better" solution, whereas in my experience, in reality, ESXi is the better
technology.
What you wrote is now very atypical use: faithfully implementing
all quirks of real devices. More typical case is guest which
knows that it is running on a hypervisor and uses virtual
interface with no real counterpart. For this quality of
virtual interfaces matters. I do not know how ESXi compares
to KVM, but I know that "equivalent" but different virtual
interfaces in qemu+KVM may have markedly different performance.
Are you talking about paravirtual drivers?
To get back to VMS: I don't think VMS has any of those.
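For what it is worth, in a Linux guest the paravirtual devices that
qemu/KVM expose (virtio disk, network, balloon and so on) show up as
ordinary PCI devices carrying the virtio vendor ID 0x1af4, so checking
for them is easy. A small sketch, assuming a Linux guest with sysfs
mounted:

#include <dirent.h>
#include <stdio.h>

int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *d = opendir(base);
    if (!d) {
        perror(base);
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        char path[512];
        snprintf(path, sizeof path, "%s/%s/vendor", base, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        unsigned vendor = 0;
        /* 0x1af4 is the PCI vendor ID used by virtio devices. */
        if (fscanf(f, "%x", &vendor) == 1 && vendor == 0x1af4)
            printf("paravirtual (virtio) device at %s\n", e->d_name);
        fclose(f);
    }
    closedir(d);
    return 0;
}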
So Goldberg defined two "types" of hypervisor in his
dissertation: Types 1 and 2. Of course, this is an oversimplification,
and those of us who work on OSes and hypervisors
understand that these distinctions are blurry and more on a
continuum than hard and fast buckets, but to a first order
approximation these categories are useful.
Roughly, a Type-1 hypervisor is one that runs on the bare metal
and only supports guests; usually some special guest is
designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
examples of Type-1 hypervisors.
Again, roughly, a Type-2 hypervisor is one that runs in the
context of an existing operating system, using its services and implementation for some of its functionality; examples include
KVM (they _say_ it's type 1, but that's really not true) and
PA1050. Usually with a Type-2 HV you've got a userspace program
running under the host operating system that provides control
functionality, device models, and so on. QEMU is an example of
such a thing (sometimes, confusingly, this is called the
hypervisor, while the kernel-resident component is called the
Virtual Machine Monitor, or VMM), but other examples exist:
CrosVM, for instance.
On 11/28/2024 8:24 AM, Dan Cross wrote:
So Goldberg defined two "types" of hypervisor in his
dissertation: Types 1 and 2. Of course, this is an over
simplification, and those of us who work on OSes and hypervisors
understand that these distinctions are blurry and more on a
continuum than hard and fast buckets, but to a first order
approximation these categories are useful.
Roughly, a Type-1 hypervisor is one that runs on the bare metal
and only supports guests; usually some special guest is
designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
examples of Type-1 hypervisors.
Again, roughly, a Type-2 hypervisor is one that runs in the
context of an existing operating system, using its services and
implementation for some of its functionality; examples include
KVM (they _say_ it's type 1, but that's really not true) and
PA1050. Usually with a Type-2 HV you've got a userspace program
running under the host operating system that provides control
functionality, device models, and so on. QEMU is an example of
such a thing (sometimes, confusingly, this is called the
hypervisor while the kernel-resident component, is called the
Virtual Machine Monitor, or VMM), but other examples exist:
CrosVM, for instance.
I think the relevant distinction is that type 1 runs in the
kernel while type 2 runs on the kernel.
KVM runs in Linux not on Linux. Which makes it type 1.
In article <vin597$3sjr$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
From what you wrote seem that ESXi is more similar to Xen than to
KVM+qemu, that is ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs is running
unvirtualized and only rest is virtualized.
I think that dates back to the old distinction between “type 1” and “type
2“ hypervisors. It’s an obsolete distinction nowadays.
No.
If you look at what is available and what it is used for then you will
see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
No, that has nothing to do with it.
On 12/3/2024 10:36 AM, Dan Cross wrote:
In article <vin597$3sjr$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
From what you wrote seem that ESXi is more similar to Xen than to
KVM+qemu, that is ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs is running
unvirtualized and only rest is virtualized.
I think that dates back to the old distinction between “type 1” and “type
2“ hypervisors. It’s an obsolete distinction nowadays.
No.
If you look at what is available and what it is used for then you will
see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
No, that has nothing to do with it.
Yes. It has.
The question was whether the type 1 vs type 2 distinction is obsolete.
The fact that "what is labeled type 1 is used for production and what is >labeled type 2 is used for development" proves that people think it
matters.
So either almost everybody is wrong or it matters.
In article <vin68p$3sjr$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 11/28/2024 8:24 AM, Dan Cross wrote:
So Goldberg defined two "types" of hypervisor in his
dissertation: Types 1 and 2. Of course, this is an over
simplification, and those of us who work on OSes and hypervisors
understand that these distinctions are blurry and more on a
continuum than hard and fast buckets, but to a first order
approximation these categories are useful.
Roughly, a Type-1 hypervisor is one that runs on the bare metal
and only supports guests; usually some special guest is
designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
examples of Type-1 hypervisors.
Again, roughly, a Type-2 hypervisor is one that runs in the
context of an existing operating system, using its services and
implementation for some of its functionality; examples include
KVM (they _say_ it's type 1, but that's really not true) and
PA1050. Usually with a Type-2 HV you've got a userspace program
running under the host operating system that provides control
functionality, device models, and so on. QEMU is an example of
such a thing (sometimes, confusingly, this is called the
hypervisor while the kernel-resident component, is called the
Virtual Machine Monitor, or VMM), but other examples exist:
CrosVM, for instance.
I think the relevant distinction is that type 1 runs in the
kernel while type 2 runs on the kernel.
No. They both run in supervisor mode. On x86, this is even
necessary; the instructions to enter guest mode are privileged.
Go back to Goldberg's dissertation; he discusses this at length.
KVM runs in Linux not on Linux. Which makes it type 1.
Nope. KVM is dependent on Linux at this point. The claim that
it is a type-1 hypervisor is predicated on the idea that it was
separable from Linux, but I don't think anyone believes that
anymore.
On 12/3/2024 10:36 AM, Dan Cross wrote:
In article <vin68p$3sjr$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 11/28/2024 8:24 AM, Dan Cross wrote:
So Goldberg defined two "types" of hypervisor in his
dissertation: Types 1 and 2. Of course, this is an over
simplification, and those of us who work on OSes and hypervisors
understand that these distinctions are blurry and more on a
continuum than hard and fast buckets, but to a first order
approximation these categories are useful.
Roughly, a Type-1 hypervisor is one that runs on the bare metal
and only supports guests; usually some special guest is
designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
examples of Type-1 hypervisors.
Again, roughly, a Type-2 hypervisor is one that runs in the
context of an existing operating system, using its services and
implementation for some of its functionality; examples include
KVM (they _say_ it's type 1, but that's really not true) and
PA1050. Usually with a Type-2 HV you've got a userspace program
running under the host operating system that provides control
functionality, device models, and so on. QEMU is an example of
such a thing (sometimes, confusingly, this is called the
hypervisor while the kernel-resident component, is called the
Virtual Machine Monitor, or VMM), but other examples exist:
CrosVM, for instance.
I think the relevant distinction is that type 1 runs in the
kernel while type 2 runs on the kernel.
Reinserted:
# If VSI created a hypervisor as part of VMS then if
# it was in SYS$SYSTEM it would be a type 2 while if it
# was in SYS$LOADABLE_IMAGES it would be a type 1.
No. They both run in supervisor mode. On x86, this is even
necessary; the instructions to enter guest mode are privileged.
That the code does something that ends up bringing the CPU into
privileged mode does not make the code part of the kernel.
To build on the VMS example the hypothetical type 2
hypervisor in SYS$SYSTEM could (if properly authorized)
call SYS$CMKRNL and do whatever. It would not become
part of the VMS kernel from that.
Just like VMWare Player or VirtualBox running on Windows
is not part of the Windows kernel even if they do use CPU
support for virtualization.
Go back to Goldberg's dissertation; he discusses this at length.
KVM runs in Linux not on Linux. Which makes it type 1.
Nope. KVM is dependent on Linux at this point. The claim that
it is a type-1 hypervisor is predicated on the idea that it was
separable from Linux, but I don't think anyone believes that
anymore.
It is the opposite. KVM is type 1 not because it is separable
from Linux but because it is inseparable from Linux.
In article <vina48$3sjr$6@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/3/2024 10:36 AM, Dan Cross wrote:
In article <vin68p$3sjr$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
KVM runs in Linux not on Linux. Which makes it type 1.
Nope. KVM is dependent on Linux at this point. The claim that
it is a type-1 hypervisor is predicated on the idea that it was
separable from Linux, but I don't think anyone believes that
anymore.
It is the opposite. KVM is type 1 not because it is separable
from Linux but because it is inseparable from Linux.
Kinda. The claim is that KVM turns Linux+KVM into a type-1
hypervisor; that is, the entire combination becomes the HV.
That's sort of a silly distinction, though, since the real
differentiator, defined by Goldberg, is whether or not the VMM
makes use of existing system services, which KVM very much does.
In article <vin939$3sjr$5@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/3/2024 10:36 AM, Dan Cross wrote:
In article <vin597$3sjr$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
From what you wrote seem that ESXi is more similar to Xen than to KVM+qemu, that is ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs is running
unvirtualized and only rest is virtualized.
I think that dates back to the old distinction between “type 1” and “type
2“ hypervisors. It’s an obsolete distinction nowadays.
No.
If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
No, that has nothing to do with it.
Yes. It has.
The question was whether the type 1 vs type 2 distinction is obsolete.
As I've posted on numerous occasions, at length, citing primary
sources, the distinction is not exact; that doesn't mean that it
is obsolete or useless.
The fact that "what is labeled type 1 is used for production and what is
labeled type 2 is used for development" proves that people think it
matters.
That seems to be something you invented: I can find no serious
reference that suggests that what you wrote is true,
so it is
hard to see how it "proves" anything. KVM is used extensively
in production and is a type-2 hypervisor, for example.
z/VM is
used extensively in production, and claims to be a type-2
hypervisor (even though it more closely resembles a type-1 HV).
On 12/3/2024 11:10 AM, Dan Cross wrote:
In article <vina48$3sjr$6@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/3/2024 10:36 AM, Dan Cross wrote:
In article <vin68p$3sjr$4@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
KVM runs in Linux not on Linux. Which makes it type 1.
Nope. KVM is dependent on Linux at this point. The claim that
it is a type-1 hypervisor is predicated on the idea that it was
separable from Linux, but I don't think anyone believes that
anymore.
It is the opposite. KVM is type 1 not because it is separable
from Linux but because it is inseparable from Linux.
Kinda. The claim is that KVM turns Linux+KVM into a type-1
hypervisor; that is, the entire combination becomes the HV.
That's sort of a silly distinction, though, since the real
differentiator, defined by Goldberg, is whether or not the VMM
makes use of existing system services, which KVM very much does.
ESXi is basic OS functionality and virtualization services
in a single kernel.
Linux+KVM is basic OS functionality and virtualization services
in a single kernel.
Logically they work the same way.
The differences are not in how they work, but in history
and reusability in other contexts:
* Linux existed before KVM
* Linux has more functionality so it can be and is used without KVM
But type 1 vs type 2 should depend on how it works not on
history and reusability in other contexts.
Nope. KVM is dependent on Linux at this point. The claim that
it is a type-1 hypervisor is predicated on the idea that it was
separable from Linux, but I don't think anyone believes that
anymore.
Well, the Joyent folks moved it to Illumos, so it was at least sorta separable. And it still works, though the community seems to have
decided that Bhyve is better, so it will probably rot over time.
On 12/3/2024 10:55 AM, Dan Cross wrote:
In article <vin939$3sjr$5@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/3/2024 10:36 AM, Dan Cross wrote:
In article <vin597$3sjr$2@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
From what you wrote seem that ESXi is more similar to Xen than to KVM+qemu, that is ESXi and Xen discourage running unvirtualized programs
while in KVM+qemu some (frequently most) programs is running
unvirtualized and only rest is virtualized.
I think that dates back to the old distinction between “type 1” and “type
2“ hypervisors. It’s an obsolete distinction nowadays.
No.
If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
No, that has nothing to do with it.
Yes. It has.
The question was whether the type 1 vs type 2 distinction is obsolete.
As I've posted on numerous occasions, at length, citing primary
sources, the distinction is not exact; that doesn't mean that it
is obsolete or useless.
The post I was replying to called it obsolete. So that was the topic
of my post.
The fact that "what is labeled type 1 is used for production and what is >>> labeled type 2 is used for development" proves that people think it
matters.
That seems to be something you invented: I can find no serious
reference that suggests that what you wrote is true,
Is it your experience that people do their development on ESXi/KVM
and run their production on VMWare Player/VirtualBox?
:-)
People do development on VMWare Player/VirtualBox and run
production on ESXi/KVM.
hard to see how it "proves" anything. KVM is used extensively
in production and is a type-2 hypervisor, for example.
When I wrote "is labeled" I am talking about what the
authors and the industry in general are calling it.
In that sense KVM is labeled a type 1 hypervisor. I can
find Redhat links if you don't believe me.
That you consider it to be type 2 does not really matter.
z/VM is used extensively in production, and claims to be a type-2
hypervisor (even though it more closely resembles a type-1 HV).
True.
The type 1 for production and type 2 for development does
not hold in the mainframe world.
On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:
If you look at what is available and what it is used for then you will
see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
What people discovered was, they needed to run full-fat system management suites, reporting tools, backup/maintenance tools etc on the hypervisor.
In other words, all the regular filesystem-management functions you need
on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
cut it any more -- virtualization is nowadays done on full-function Linux kernels (all “type 2”).
On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:
If you look at what is available and what it is used for then you will
see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
What people discovered was, they needed to run full-fat system management
suites, reporting tools, backup/maintenance tools etc on the hypervisor.
In other words, all the regular filesystem-management functions you need
on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
cut it any more -- virtualization is nowadays done on full-function Linux
kernels (all “type 2”).
Having a full host OS is very nice for a development system with a few
VM's to build and test various stuff.
It does not scale to a large production environment. For that you need central management servers.
ESXi has the vSphere suite of products. For many years the basic ESXi
was actually free and customers only paid for the advanced vSphere
stuff.
For KVM there are many products to choose from. Redhat has
Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
but it came under the OpenShift umbrella when containers took
off). The big cloud vendors that may be managing millions of
servers must have some custom tools for that. You gave a link
to someone switching to the OpenNebula product. Proxmox VE is
another option. Lots of different products with different
feature sets to match different requirements.
In article <vio70q$e1fp$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:
If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
What people discovered was, they needed to run full-fat system management
suites, reporting tools, backup/maintenance tools etc on the hypervisor.
In other words, all the regular filesystem-management functions you need
on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
cut it any more -- virtualization is nowadays done on full-function Linux
kernels (all “type 2”).
Having a full host OS is very nice for a development system with a few
VM's to build and test various stuff.
It does not scale to a large production environment. For that you need
central management servers.
There are some very senior engineers at Google and Amazon who
run the largest VM-based production environments on the planet
and they disagree. There, VMs run under a "full host OS."
In article <vio70q$e1fp$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
ESXi has the vSphere suite of products. For many years the basic ESXi
was actually free and customers only paid for the advanced vSphere
stuff.
For KVM there are many products to choose from. Redhat has
Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
but it came under the OpenShift umbrella when containers took
off). The big cloud vendors that may be managing millions of
servers must have some custom tools for that. You gave a link
to someone switching to the OpenNebula product. Proxmox VE is
another option. Lots of different products with different
feature sets to match different requirements.
It's unclear what you think that KVM is. KVM requires a
userspace component to actually drive the VCPUs; that runs under
Linux, which is a "full host OS." At least Google uses the same
management tools to drive those processes as it uses for the
rest of its production services (e.g., borg, etc). The
userspace component for GCP is not QEMU, but rather, a Google
authored program. However, it is in all-respects just another
google3 binary.
On 12/3/2024 7:41 PM, Dan Cross wrote:
In article <vio70q$e1fp$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:
If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
What people discovered was, they needed to run full-fat system management
suites, reporting tools, backup/maintenance tools etc on the hypervisor.
In other words, all the regular filesystem-management functions you need
on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
cut it any more -- virtualization is nowadays done on full-function Linux
kernels (all “type 2”).
Having a full host OS is very nice for a development system with a few
VM's to build and test various stuff.
It does not scale to a large production environment. For that you need
central management servers.
There are some very senior engineers at Google and Amazon who
run the largest VM-based production environments on the planet
and they disagree. There, VMs run under a "full host OS."
You totally missed the point.
With KVM they do have a full host OS.
But they don't need it to "run full-fat system management
suites, reporting tools, backup/maintenance tools etc on
the hypervisor", because they don't manage all those VM's
that way. That would be impossible.
On 12/3/2024 7:41 PM, Dan Cross wrote:
In article <vio70q$e1fp$1@dont-email.me>,
Arne Vajhøj <arne@vajhoej.dk> wrote:
ESXi has the vSphere suite of products. For many years the basic ESXi
was actually free and customers only paid for the advanced vSphere
stuff.
For KVM there are many products to choose from. Redhat has
Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
but it came under the OpenShift umbrella when containers took
off). The big cloud vendors that may be managing millions of
servers must have some custom tools for that. You gave a link
to someone switching to the OpenNebula product. Proxmox VE is
another option. Lots of different products with different
feature sets to match different requirements.
It's unclear what you think that KVM is. KVM requires a
userspace component to actually drive the VCPUs; that runs under
Linux, which is a "full host OS." At least Google uses the same
management tools to drive those processes as it uses for the
rest of its production services (e.g., borg, etc). The
userspace component for GCP is not QEMU, but rather, a Google
authored program. However, it is in all-respects just another
google3 binary.
That is the general model.
central management server---(network)---management agent---hypervisor
Details can vary but that is the only way to manage at scale.
Which is why the claim that the hypervisor has to come with
a full host OS does not hold water for large production
environments.
They just need the very basic OS, the virtualization service
and the agent.
Google could tailor down the Linux KVM they use to the very
minimum if they wanted to. But I have no idea if they have
actually bothered doing so.
On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
From what you wrote seem that ESXi is more similar to Xen than to
KVM+qemu, that is ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs is running
unvirtualized and only rest is virtualized.
I think that dates back to the old distinction between “type 1” and “type
2“ hypervisors. It’s an obsolete distinction nowadays.
No.
If you look at what is available and what it is used for then you will
see that what is labeled type 1 is used for production and what is
labeled type 2 is used for development. It matters.
Arne
In article <vil9jg$3ives$3@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:
. . . a company which switched from VMware to an open-source
alternative as a result of Broadcom's massive price hikes,
and encountered an unexpected benefit: the resources consumed
by system management overhead on the new product were so much
less, they could run more VMs on the same hardware.
That will be nice if it happens, but the pricing is a fully sufficient
reason for moving. The way that some companies are seeing 1,000%, while others see 300% or 500% makes customers very suspicious that Broadcom are trying to jack up the price as much as each customer will take. If so,
they aren't very good at that.
My employer was given a special one-off offer of 500% and went "Hell,
no!"
Are you sure your employer's response was not a little more Anglo-Saxon
in nature ? :-)
On a more serious note, does anyone else think Broadcom are showing
absolute contempt towards their users ? It reminds me of the person who
took over supply of a vital medical drug in the US a few years ago and
promptly increased the price massively because the users of the drug
were a captive market that _needed_ to buy the drug.
This is so blatant by Broadcom, I'm surprised the EU has not got more seriously involved.
Simon.
On 12/4/2024 8:20 AM, Simon Clubley wrote:
On a more serious note, does anyone else think Broadcom are showing absolute contempt towards their users ?
I am expecting companies to attempt to maximize profit.
That expectation tends to minimize disappointment. :-)
The question is of course whether Broadcom is maximizing profit!
The pricing strategy seems to be to cash in now and not worry about the
long term, as opposed to trying to set up a long-term steady income.
Given the move to containers and cloud, I actually think that it may
be a profit-maximizing strategy. With a shrinking market, the value
of the long term is not so big.
But that raises another question: why did they pay so much? The price
hikes may be profit maximizing, but they will not bring in what they
paid for VMWare.
It reminds me of the person who took over
supply of a vital medical drug in the US a few years ago and promptly
increased the price massively because the users of the drug were a captive market that _needed_ to buy the drug.
This guy:
https://en.wikipedia.org/wiki/Martin_Shkreli