• VMWARE/ESXi Linux

    From David Turner@21:1/5 to All on Wed Nov 27 16:33:56 2024
    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still Linux

    So basically another layer to fail before VMS loads. Wonder why people
    are not using the real Alpha or Integrity as cheap as they are

    DT

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Robert A. Brooks@21:1/5 to David Turner on Wed Nov 27 16:53:14 2024
    On 11/27/2024 4:33 PM, David Turner wrote:
    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel....  stripped down but still Linux

    So basically another layer to fail before VMS loads. Wonder why people are not
    using the real Alpha or Integrity as cheap as they are


    For many reasons, but foremost among them is that the lords of the datacenter have spoken -- they don't want strange, proprietary hardware.

    Independent of whatever type of hypervisor ESXi is, we find it to be rock-solid hosting our array of testing, development, and production systems.

    We find our KVM-based hypervisors to be solid as well.

    As Reagan stated earlier today, VirtualBox is used at VSI in various places, and
    works well for many folks, primarily for development. While it's not listed as a supported hypervisor by VSI, we make sure that we don't break anything on VirtualBox when we're making a fix for something to work (or work better) on KVM
    or ESXi.

    While I used VirtualBox on CentOS early on in the port, I now use ESXi exclusively, primarily because I prefer direct fibre channel access to my SAN.

    My ESXi server (Proliant DL380 Gen10) goes months without a reboot, which is usually to install new firmware on the hardware. We make sure that we always test the latest fibre HBA firmware ASAP.

    Yeah, MSCP-served volumes by IA64 and Alpha work (and I still use it for some devices), but my cluster provides a lot of testing capability where direct fibre
    access is important.

    --
    --- Rob

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to David Turner on Wed Nov 27 22:00:22 2024
    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:

    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still Linux

    And not even using the native KVM virtualization architecture that is
    built into Linux.

    So basically another layer to fail before VMS loads. Wonder why people
    are not using the real Alpha or Integrity as cheap as they are

    Marketing. Decisions made by PHBs rather than engineers.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to D'Oliveiro on Wed Nov 27 22:24:00 2024
    In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:
    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:
    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still
    Linux

    And not even using the native KVM virtualization architecture that
    is built into Linux.

    History: VMware ESXi was released in 2001 and KVM was merged into the
    Linux kernel in 2007.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to David Turner on Wed Nov 27 19:12:45 2024
    On 11/27/2024 4:33 PM, David Turner wrote:
    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel....  stripped down but still Linux

    I don't know who is telling you that. But not so.

    ESXi has its own proprietary kernel called VMKernel.

    You can probably call it Linux inspired.

    Similar file system layout. Compatible API subset (but not a fully
    compatible API). Similar driver architecture. Similar CLI experience
    (BusyBox provides the usual CLI interface).

    But not based on Linux kernel code. And not fully compatible. And
    not all the functionality (it is specialized for the hypervisor role).

    ESX is different from ESXi. ESX uses some Red Hat stuff.

    WorkStation and Player are different again - type 2.

    So basically another layer to fail before VMS loads.

    There are decades of experience showing that ESXi reliability is not
    a problem in practice.

    Wonder why people
    are not using the real Alpha or Integrity as cheap as they are

    You can't buy a new one.

    They (especially Alpha) are pretty slow compared to new systems.

    Corporate IT does not like special hardware, and public cloud
    providers do not offer it.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Thu Nov 28 01:03:05 2024
    On Wed, 27 Nov 2024 19:12:45 -0500, Arne Vajhøj wrote:

    ESXi has its own proprietary kernel called VMKernel.

    You can probably call it Linux inspired.

    Similar file system layout. Compatible API subset (but not full
    compatible API). Similar driver architecture. Similar CLI experience
    (BusyBox provide the usual CLI interface).

    But not based on Linux kernel code. And not fully compatible. And not
    all functionality (it is specialized for hypervisor role).

    In other words, it originated back in the day when the difference between “Type 1” and “Type 2” hypervisors was important.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to John Dallman on Thu Nov 28 01:00:45 2024
    On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:

    In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:

    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still
    Linux

    And not even using the native KVM virtualization architecture that is
    built into Linux.

    History: VMware ESXi was released in 2001 and KVM was merged into the
    Linux kernel in 2007.

    In other words, VMware has long been obsoleted by better solutions.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Matthew R. Wilson@21:1/5 to Lawrence D'Oliveiro on Thu Nov 28 08:39:39 2024
    On 2024-11-28, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:

    In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:

    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still
    Linux

    And not even using the native KVM virtualization architecture that is
    built into Linux.

    History: VMware ESXi was released in 2001 and KVM was merged into the
    Linux kernel in 2007.

    In other words, VMware has long been obsoleted by better solutions.

    Please explain how ESXi is obsolete, and how KVM is a better solution.

    Both KVM and ESXi use the processor's VT-x (or AMD's equivalent, AMD-V) extensions on x86 to efficiently handle instructions that require
    hypervisor intervention. I'm not sure how you'd judge which one is a
    better solution in that regard. So the only thing that matters, really,
    is the virtualization of everything other than the processor itself.
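
    For a concrete illustration (a minimal sketch, assuming GCC or Clang on an
    x86 host; nothing here comes from VMware's or KVM's own code), you can check
    for those extensions from userspace via CPUID:

        #include <cpuid.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;

            /* CPUID leaf 1, ECX bit 5: VMX (Intel VT-x). */
            if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
                printf("Intel VT-x (VMX) available\n");

            /* CPUID leaf 0x80000001, ECX bit 2: SVM (AMD-V). */
            if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
                printf("AMD-V (SVM) available\n");

            return 0;
        }

    Both hypervisors build on those same bits; the differences are in everything
    layered on top.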

    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system. qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does. Pretty much any x86 operating system, historical or current, _just works_ in ESXi. With qemu+KVM, you're
    going to have good success with the "big name" OSes...Windows, Linux,
    the major BSDs, etc., but you're going to be fighting with quirks and
    problems if you're trying, say, old OS/2 releases. That's not relevant
    for most people looking for virtualization solutions, and the problems
    aren't always insurmountable, but you're claiming that KVM is a "better" solution, whereas in my experience, in reality, ESXi is the better
    technology.

    (As an aside, VMWare's _desktop_ [not server] virtualization product,
    VMWare Workstation, looks like it's making moves to use KVM under the
    hood, but they have said they will continue using their own proprietary
    virtual devices and drivers, which is really what sets VMWare apart from
    qemu. This is a move they've already made on both the Windows and Mac OS version of VMWare Workstation if I understand correctly [utilizing
    Hyper-V and Apple's Virtualization framework]. This makes sense... as I
    said, the underlying virtualization of the processor is being handled by
    the VT-x capabilities of the processor whether you're using VMWare,
    VirtualBox, KVM, etc., so when running a desktop product under Linux,
    you may as well use KVM but you still need other software to build the
    rest of the virtual system and its virtual devices, so that's where
    VMWare and qemu will still differentiate themselves. None of this is
    relevant for ESXi, though, because as has been pointed out earlier in
    the thread, it is not running on Linux at all, so VMKernel is providing
    its own implementation of, essentially, what KVM provides in the Linux
    kernel.)

    qemu and KVM have the huge advantage that they are open source and free software, of course, whereas ESXi (and vCenter) are closed source and
    expensive (barring the old no-cost ESXi license).

    But ESXi just works. It's solid, it has a huge infrastructure around it
    for vSAN stuff, virtual networking management, vMotion "just works," I
    find the management interface nicer than, say, Proxmox (although Proxmox
    is an impressive product), etc.

    It's sad to see Broadcom is going to do everything they can to drive
    away the VMWare customer base. VMWare will lose its market-leader
    position, FAR fewer people will learn about it and experiment with it
    since Broadcom killed the no-cost ESXi licenses, and popularity of
    Proxmox is going to skyrocket, I suspect. Which isn't a bad thing --
    when open source solutions get attention and traction, they continue to improve, and as I said earlier, Proxmox is already an impressive product
    so I look forward to its future.

    But make no mistake: VMWare was -- and I'd say still is -- the gold
    standard for virtualization, both on the server (ESXi) and the
    workstation (VMWare Workstation). VMWare's downfall at the hands of
    Broadcom will 100% be due to Broadcom's business practices, not
    technology.

    I'm a bit of a free software zealot, yet even I still use ESXi for my
    "real" servers. I do look forward to eventually replacing my ESXi boxes
    with Proxmox for philosophical reasons, but I'm in no rush.

    -Matthew

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Gcalliet@21:1/5 to All on Thu Nov 28 14:44:37 2024
    On 27/11/2024 at 22:33, David Turner wrote:
    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel....  stripped down but still Linux

    So basically another layer to fail before VMS loads. Wonder why people
    are not using the real Alpha or Integrity as cheap as they are

    DT
    Dear David,

    Don't you think you are a little bit obsolete? This is the second time
    the question has been asked, and the second time the discussion has
    turned into a comparison of virtualization solutions.

    It's a free medium, and everyone can choose what they want to talk
    about.

    These discussions show where the major interests lie. So?

    I remember the 2013-2014 years when I was going around Europe with Kevin
    saying that perhaps VMS wasn't a major ecosystem, but that there was
    interest in carrying on with it. I remember saying the same thing here.
    And at that time I was, like you, totally obsolete.

    So it seems there are two obsolete guys on c.o.v - not mentioning
    Subcommandante, who has to keep on being anonymous.

    Today it seems I was right. VSI cannot say otherwise.

    Something has to be said. The mainstream is always very important, and
    a lot of things have to be done for the mainstream. But thinking only
    in terms of mainstream situations is always wrong.

    Thinking only from the mainstream? Example: HP's board looks at the
    nth column of the Excel business report about VMS... so VMS is
    dead. Hmm!

    For sure, the bare-metal option for VMS is an exception to an
    exception. So it's wrong? Perhaps.

    But.

    There is somehow a big structural mistake in the way we choose
    virtualization only for VMS.

    They say: we cannot keep up with the pace of hardware renewal, and
    using virtualization guarantees we won't have to. Right.

    Almost right. Because the same problems will appear when Mr. Broadcom
    for VMware, Mr. IBM for Red Hat, or Mr. Oracle for VM or KVM decides
    to create new versions because of the gap with new hardware. For sure,
    with virtualization the pace is more sustainable.

    But the flaw in the argument is here: wherever you choose to go, you
    face the contradiction between sustainability, a good pace of
    evolution, and the direction the mainstream of evolution is taking.
    You cannot dismiss this problem by arguing that virtualization is The
    Solution. The problem will always be there, and we have to cope with it.

    Dear obsolete friend, I can say we are not so isolated. Sometimes
    things can be understood by observing other landscapes. The big word I
    am thinking of is: LTS. However surprising it may be, a lot of software
    suppliers and developers, for a lot of OSes, languages, and
    applications, are offering LTS - sometimes for more money.

    Perhaps saying that could make us too sad. But VMS is structurally in
    the domain of LTS. And because of that, perhaps the concept "obsolete"
    has to be revisited for us.

    We have to be able to support, over a very long time, the specific
    pace of each site, guaranteeing evolution at chosen paces and
    responding to minority demands... And yes, the business plans for that
    have to be adapted. I don't live in Massachusetts or in Denmark, but I
    do think a lot of thinking is already out there about circular
    economics and sustainability in economics...

    (On my side, I bought a Gen9 ProLiant for 150 € and booted VMS on it.
    I bet that in 2034 my VMS will still be going, the same year that
    version 222 of Red Hat will be a huge problem :) - end of the
    parenthesis.)

    More on that.

    I know MIT has not always been as fun as Berkeley - which is why VMS
    is better than Unix, everyone knows that :). So social evolutions are
    perhaps a little bit outside of Boston's concerns. But there is a
    fact: sustainability is becoming a major concept in the world. A lot
    of people think something like: an LTS world could be fine.
    Sustainability is therefore a huge domain for innovation. VMS, not at
    all created for that, I know, could be a domain for investigating new
    ways toward sustainability.

    My conclusion, dear obsolete friend: you and me and Subcommandante are
    innovators. Somehow a difficult position, because not in the
    majority... as usual for innovation.

    "You may now return to business as usual."

    gcalliet

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Matthew R. Wilson on Thu Nov 28 13:24:17 2024
    In article <slrnvkgb2b.2dr8a.mwilson@daenerys.home.mattwilson.org>,
    Matthew R. Wilson <mwilson@mattwilson.org> wrote:
    On 2024-11-28, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:

    In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:

    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still
    Linux

    And not even using the native KVM virtualization architecture that is
    built into Linux.

    History: VMware ESXi was released in 2001 and KVM was merged into the
    Linux kernel in 2007.

    In other words, VMware has long been obsoleted by better solutions.

    Please explain how ESXi is obsolete, and how KVM is a better solution.

    I wouldn't bother trying to argue with him: he's a known troll.

    Both KVM and ESXi use the processor's VT-x (or AMD's equivalent, AMD-V) extensions on x86 to efficiently handle instructions that require
    hypervisor intervention. I'm not sure how you'd judge which one is a
    better solution in that regard. So the only thing that matters, really,
    is the virtualization of everything other than the processor itself.

    So Goldberg defined two "types" of hypervisor in his
    dissertation: Types 1 and 2. Of course, this is an over
    simplification, and those of us who work on OSes and hypervisors
    understand that these distinctions are blurry and more on a
    continuum than hard and fast buckets, but to a first order
    approximation these categories are useful.

    Roughly, a Type-1 hypervisor is one that runs on the bare metal
    and only supports guests; usually some special guest is
    designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
    examples of Type-1 hypervisors.

    Again, roughly, a Type-2 hypervisor is one that runs in the
    context of an existing operating system, using its services and
    implementation for some of its functionality; examples include
    KVM (they _say_ it's type 1, but that's really not true) and
    PA1050. Usually with a Type-2 HV you've got a userspace program
    running under the host operating system that provides control
    functionality, device models, and so on. QEMU is an example of
    such a thing (sometimes, confusingly, this is called the
    hypervisor, while the kernel-resident component is called the
    Virtual Machine Monitor, or VMM), but other examples exist:
    CrosVM, for instance.

    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system.

    I think that QEMU is what one _often_ uses, but it doesn't have
    to be. I mentioned CrosVM above, which works with KVM, but
    other examples exist: Google, Amazon, and AliBaba all use KVM on
    their cloud offerings, but at least neither Google nor Amazon
    use QEMU; I don't know about AliBaba but I suspect they have
    their own. (Microsoft of course uses Hyper-V.)

    qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does. Pretty much any x86 operating system, historical or current, _just works_ in ESXi. With qemu+KVM, you're
    going to have good success with the "big name" OSes...Windows, Linux,
    the major BSDs, etc., but you're going to be fighting with quirks and problems if you're trying, say, old OS/2 releases. That's not relevant
    for most people looking for virtualization solutions, and the problems
    aren't always insurmountable, but you're claiming that KVM is a "better" solution, whereas in my experience, in reality, ESXi is the better technology.

    (As an aside, VMWare's _desktop_ [not server] virtualization product,
    VMWare Workstation, looks like it's making moves to use KVM under the
    hood, but they have said they will continue using their own proprietary virtual devices and drivers, which is really what sets VMWare apart from qemu. This is a move they've already made on both the Windows and Mac OS version of VMWare Workstation if I understand correctly [utilizing
    Hyper-V and Apple's Virtualization framework]. This makes sense... as I
    said, the underlying virtualization of the processor is being handled by
    the VT-x capabilities of the processor whether you're using VMWare, VirtualBox, KVM, etc., so when running a desktop product under Linux,
    you may as well use KVM but you still need other software to build the
    rest of the virtual system and its virtual devices, so that's where
    VMWare and qemu will still differentiate themselves. None of this is
    relevant for ESXi, though, because as has been pointed out earlier in
    the thread, it is not running on Linux at all, so VMKernel is providing
    its own implementation of, essentially, what KVM provides in the Linux kernel.)

    Well, what KVM provides, plus a whole lot more. ESXi is effectively
    its own operating system, even though it's marketed as a type-1
    HV.

    qemu and KVM have the huge advantage that they are open source and free software, of course, whereas ESXi (and vCenter) are closed source and expensive (barring the old no-cost ESXi license).

    But ESXi just works. It's solid, it has a huge infrastructure around it
    for vSAN stuff, virtual networking management, vMotion "just works," I
    find the management interface nicer than, say, Proxmox (although Proxmox
    is an impressive product), etc.

    It's sad to see Broadcom is going to do everything they can to drive
    away the VMWare customer base. VMWare will lose its market-leader
    position, FAR fewer people will learn about it and experiment with it
    since Broadcom killed the no-cost ESXi licenses, and popularity of
    Proxmox is going to skyrocket, I suspect. Which isn't a bad thing --
    when open source solutions get attention and traction, they continue to improve, and as I said earlier, Proxmox is already an impressive product
    so I look forward to its future.

    But make no mistake: VMWare was -- and I'd say still is -- the gold
    standard for virtualization, both on the server (ESXi) and the
    workstation (VMWare Workstation). VMWare's downfall at the hands of
    Broadcom will 100% be due to Broadcom's business practices, not
    technology.

    Yup, it's a bit sad, though it does open up a lot of market
    opportunities for other players.

    I'm a bit of a free software zealot, yet even I still use ESXi for my
    "real" servers. I do look forward to eventually replacing my ESXi boxes
    with Proxmox for philosophical reasons, but I'm in no rush.

    Check out Bhyve; it's very nice.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Matthew R. Wilson on Thu Nov 28 13:24:23 2024
    On 11/28/2024 3:39 AM, Matthew R. Wilson wrote:
    But ESXi just works. It's solid, it has a huge infrastructure around it
    for vSAN stuff, virtual networking management, vMotion "just works," I
    find the management interface nicer than, say, Proxmox (although Proxmox
    is an impressive product), etc.

    It's sad to see Broadcom is going to do everything they can to drive
    away the VMWare customer base. VMWare will lose its market-leader
    position, FAR fewer people will learn about it and experiment with it
    since Broadcom killed the no-cost ESXi licenses, and popularity of
    Proxmox is going to skyrocket, I suspect. Which isn't a bad thing --
    when open source solutions get attention and traction, they continue to improve, and as I said earlier, Proxmox is already an impressive product
    so I look forward to its future.

    But make no mistake: VMWare was -- and I'd say still is -- the gold
    standard for virtualization, both on the server (ESXi) and the
    workstation (VMWare Workstation). VMWare's downfall at the hands of
    Broadcom will 100% be due to Broadcom's business practices, not
    technology.

    I don't think the Broadcom price hikes will start the ESXi
    decline - I think they will accelerate it.

    Even before the acquisition and price hikes, ESXi
    was heading for decline.

    Some years ago, VMs in advanced setups were
    what the vast majority of enterprise IT used. And
    MS fanatics chose Hyper-V, Linux fanatics chose
    KVM, but the majority chose ESXi. So ESXi market
    share was huge and VMware was making good money.

    But that is not how the enterprise IT world
    looks today. Today there are 3 possible setups:
    1) public cloud
    2) on-prem with containers, either on bare metal
    or on VMs in a very basic setup (because k8s and
    other container stuff provide all the advanced functionality)
    3) on-prem with traditional VMs

    #1 is not ESXi as the big cloud vendors do not want
    to pay and they want to customize. #2 does not need to
    be ESXi as no advanced features are needed, so any
    virtualization is OK and ESXi costs money. #3 is all that
    is left for ESXi to shine with its advanced features.

    So even before the price hikes, ESXi was headed towards only
    legacy systems and on-prem stateful applications that, even
    though they can be deployed in k8s, don't really
    lend themselves to it in the same way as stateless
    applications.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to All on Thu Nov 28 19:30:00 2024
    In article <viacgn$kv9u$1@dont-email.me>, arne@vajhoej.dk (Arne Vajhøj)
    wrote:

    But that is not how the enterprise IT world
    looks today. Today there are 3 possible setups:
    1) public cloud
    2) on-prem with containers, either on bare metal
    or on VMs in a very basic setup (because k8s and
    other container stuff provide all the advanced functionality)
    3) on-prem with traditional VMs

    #1 is not ESXi as the big cloud vendors do not want
    to pay and they want to customize. #2 does not need to
    be ESXi as no advanced features are needed, so any
    virtualization is OK and ESXi costs money. #3 is all that
    is left for ESXi to shine with its advanced features.

    My employers have a mixture of all three, with a lot of #3 for automated software testing with confidential data. ESXi didn't come out with an
    Aarch64 version fast enough, which got KVM into use, and now the plan is
    to go all-KVM because Broadcom wants too much money all at once. But if
    they hadn't done that, we'd have happily stayed with ESXi.


    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Matthew R. Wilson on Thu Nov 28 21:29:27 2024
    On Thu, 28 Nov 2024 08:39:39 -0000 (UTC), Matthew R. Wilson wrote:

    Please explain how ESXi is obsolete, and how KVM is a better solution.

    KVM is built into the mainline kernel, is the basis of a broad range of virtualization solutions, and has broad support among the Linux community.
    The fact that Broadcom has had to raise prices tells you all you need to
    know about the costs of maintaining proprietary solutions.

    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system.

    QEMU is purely a userland product; KVM is a kernel feature. The two are
    really quite independent.
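
    A minimal sketch of that independence (assuming a Linux host with the kernel
    headers installed; compile with gcc and run with read/write access to
    /dev/kvm) exercises the KVM API directly, with no QEMU involved:

        #include <fcntl.h>
        #include <linux/kvm.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
            int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
            if (kvm < 0) { perror("/dev/kvm"); return 1; }

            /* The stable KVM API version has been 12 for many years. */
            printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

            /* Create an (empty) VM and one vCPU -- no QEMU involved. */
            int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
            int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
            printf("vm fd = %d, vcpu fd = %d\n", vm, vcpu);

            /* A real monitor would now map guest memory with
               KVM_SET_USER_MEMORY_REGION, mmap the vcpu's kvm_run
               area, and loop on KVM_RUN -- the role QEMU usually plays. */
            close(vcpu); close(vm); close(kvm);
            return 0;
        }

    QEMU, CrosVM, Firecracker, or a one-off tool like this can all sit on top of
    the same kernel feature.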

    qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does.

    Fine. Keep on paying the higher prices that Broadcom demands, then.
    Obviously you think they are worth the money.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to D'Oliveiro on Thu Nov 28 22:08:00 2024
    In article <vianbn$na9e$2@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    The fact that Broadcom has had to raise prices tells you all you
    need to know about the costs of maintaining proprietary solutions.

    Not so. VMware was highly profitable before Broadcom bought them.
    Broadcom decided to raise prices anyway; they have not attempted to
    defend the rises as a necessity.

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Lawrence D'Oliveiro on Thu Nov 28 19:06:50 2024
    On 11/28/2024 4:29 PM, Lawrence D'Oliveiro wrote:
    On Thu, 28 Nov 2024 08:39:39 -0000 (UTC), Matthew R. Wilson wrote:
    Please explain how ESXi is obsolete, and how KVM is a better solution.

    KVM is built into the mainline kernel, is the basis of a broad range of virtualization solutions, and has broad support among the Linux community.

    ESXi has broad support in both the Linux and Windows communities. Or
    at least it had.

    The fact that Broadcom has had to raise prices tells you all you need to
    know about the costs of maintaining proprietary solutions.

    That argument does not make any sense.

    ESXi is bringing in billions of dollars in annual revenue.

    500 software engineers at a mixed, worldwide average
    cost of 250 K$ is just 125 M$ per year.

    The 69 B$ price Broadcom paid for VMware, multiplied
    by an expected ROI of 10%, is 6.9 B$ per year.

    That is a factor of 50 difference.

    You can change the number of engineers down to 250 or up to 1000,
    or change the ROI down to 8% or up to 15%, but there is simply
    no way that engineering cost gets close to paying back the
    investment.
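
    Spelled out (using the rough assumed figures above, not Broadcom's or
    VMware's actual numbers):

        #include <stdio.h>

        int main(void)
        {
            /* Rough assumed figures from the text above, not actual numbers. */
            double engineers      = 500;
            double cost_per_eng   = 250e3;   /* 250 K$ per engineer per year */
            double purchase_price = 69e9;    /* 69 B$ paid for VMware        */
            double expected_roi   = 0.10;    /* 10% per year                 */

            double engineering = engineers * cost_per_eng;      /* ~125 M$/year */
            double required    = purchase_price * expected_roi; /* ~6.9 B$/year */

            printf("engineering cost: %.0f M$/year\n", engineering / 1e6);
            printf("required return : %.1f B$/year\n", required / 1e9);
            printf("ratio           : %.0fx\n", required / engineering);
            return 0;
        }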

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Mon Dec 2 21:42:09 2024
    Interesting report <https://arstechnica.com/information-technology/2024/12/company-claims-1000-percent-price-hike-drove-it-from-vmware-to-open-source-rival/>
    on a company which switched from VMware to an open-source alternative
    as a result of Broadcom’s massive price hikes, and encountered an
    unexpected benefit: the resources consumed by system management
    overhead on the new product were so much less, they could run more VMs
    on the same hardware.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From John Dallman@21:1/5 to D'Oliveiro on Mon Dec 2 22:26:00 2024
    In article <vil9jg$3ives$3@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    . . . a company which switched from VMware to an open-source
    alternative as a result of Broadcom's massive price hikes,
    and encountered an unexpected benefit: the resources consumed
    by system management overhead on the new product were so much
    less, they could run more VMs on the same hardware.

    That will be nice if it happens, but the pricing is a fully sufficient
    reason for moving. The way that some companies are seeing 1,000%, while
    others see 300% or 500% makes customers very suspicious that Broadcom are trying to jack up the price as much as each customer will take. If so,
    they aren't very good at that.

    My employer was given a special one-off offer of 500% and went "Hell,
    no!"

    John

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Lawrence D'Oliveiro on Mon Dec 2 19:11:30 2024
    On 12/2/2024 4:42 PM, Lawrence D'Oliveiro wrote:
    Interesting report <https://arstechnica.com/information-technology/2024/12/company-claims-1000-percent-price-hike-drove-it-from-vmware-to-open-source-rival/>
    on a company which switched from VMware to an open-source alternative
    as a result of Broadcom’s massive price hikes, and encountered an unexpected benefit: the resources consumed by system management
    overhead on the new product were so much less, they could run more VMs
    on the same hardware.

    There is no doubt that customers are leaving VMWare.

    The price hikes are so huge that they cannot be ignored.
    Some stay, some migrate, but almost everyone will do the
    analysis.

    Regarding the specific story, two things are worth noting
    after reading:
    https://en.wikipedia.org/wiki/OpenNebula

    A migration from ESXi to OpenNebula is not a migration from
    one brand of VM to another brand of VM. OpenNebula is KVM for VM +
    containers + FaaS/serverless. So it seems like the company
    did an architectural modernization, not just a VM vendor change.

    While OpenNebula CE is open source:
    <quote>
    OpenNebula CE is free and open-source software, released under the
    Apache License version 2. OpenNebula CE comes with free access to patch releases containing critical bug fixes but with no access to the regular
    EE maintenance releases. Upgrades to the latest minor/major version is
    only available for CE users with non-commercial deployments or with
    significant open source contributions to the OpenNebula
    Community. OpenNebula EE is distributed under a closed-source license
    and requires a commercial Subscription.
    </quote>

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Waldek Hebisch@21:1/5 to Matthew R. Wilson on Tue Dec 3 03:09:15 2024
    Matthew R. Wilson <mwilson@mattwilson.org> wrote:
    On 2024-11-28, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:

    In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:

    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still
    Linux

    And not even using the native KVM virtualization architecture that is
    built into Linux.

    History: VMware ESXi was released in 2001 and KVM was merged into the
    Linux kernel in 2007.

    In other words, VMware has long been obsoleted by better solutions.

    Please explain how ESXi is obsolete, and how KVM is a better solution.

    Both KVM and ESXi use the processor's VT-x (or AMD's equivalent, AMD-V) extensions on x86 to efficiently handle instructions that require
    hypervisor intervention. I'm not sure how you'd judge which one is a
    better solution in that regard. So the only thing that matters, really,
    is the virtualization of everything other than the processor itself.

    Little nitpick: virtualization needs to handle _some_ system instructions.
    But with VT-x, and particularly with nested page tables, this should
    be easy.

    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system. qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does. Pretty much any x86 operating system, historical or current, _just works_ in ESXi. With qemu+KVM, you're
    going to have good success with the "big name" OSes...Windows, Linux,
    the major BSDs, etc., but you're going to be fighting with quirks and problems if you're trying, say, old OS/2 releases. That's not relevant
    for most people looking for virtualization solutions, and the problems
    aren't always insurmountable, but you're claiming that KVM is a "better" solution, whereas in my experience, in reality, ESXi is the better technology.

    What you wrote is now a very atypical use: faithfully implementing
    all the quirks of real devices. The more typical case is a guest which
    knows that it is running on a hypervisor and uses a virtual
    interface with no real counterpart. For this, the quality of the
    virtual interfaces matters. I do not know how ESXi compares
    to KVM, but I know that "equivalent" but different virtual
    interfaces in qemu+KVM may have markedly different performance.

    (As an aside, VMWare's _desktop_ [not server] virtualization product,
    VMWare Workstation, looks like it's making moves to use KVM under the
    hood, but they have said they will continue using their own proprietary virtual devices and drivers, which is really what sets VMWare apart from qemu. This is a move they've already made on both the Windows and Mac OS version of VMWare Workstation if I understand correctly [utilizing
    Hyper-V and Apple's Virtualization framework]. This makes sense... as I
    said, the underlying virtualization of the processor is being handled by
    the VT-x capabilities of the processor whether you're using VMWare, VirtualBox, KVM, etc., so when running a desktop product under Linux,
    you may as well use KVM but you still need other software to build the
    rest of the virtual system and its virtual devices, so that's where
    VMWare and qemu will still differentiate themselves. None of this is
    relevant for ESXi, though, because as has been pointed out earlier in
    the thread, it is not running on Linux at all, so VMKernel is providing
    its own implementation of, essentially, what KVM provides in the Linux kernel.)

    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs
    while in KVM+qemu some (frequently most) programs run unvirtualized
    and only the rest is virtualized. I do not know if this sets limits on the
    quality of virtualization, but that could be a valid reason for ESXi to
    provide its own kernel.

    --
    Waldek Hebisch

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to Waldek Hebisch on Tue Dec 3 04:57:19 2024
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:

    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs run
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type 2“ hypervisors. It’s an obsolete distinction nowadays.

    And don’t forget there are other options besides full virtualization. For example, Linux offers “container” technologies of various sorts, where multiple userlands run under the same kernel.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Waldek Hebisch on Tue Dec 3 09:33:45 2024
    On 12/2/2024 10:09 PM, Waldek Hebisch wrote:
    Matthew R. Wilson <mwilson@mattwilson.org> wrote:
    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system. qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does. Pretty much any x86 operating system,
    historical or current, _just works_ in ESXi. With qemu+KVM, you're
    going to have good success with the "big name" OSes...Windows, Linux,
    the major BSDs, etc., but you're going to be fighting with quirks and
    problems if you're trying, say, old OS/2 releases. That's not relevant
    for most people looking for virtualization solutions, and the problems
    aren't always insurmountable, but you're claiming that KVM is a "better"
    solution, whereas in my experience, in reality, ESXi is the better
    technology.

    What you wrote is now a very atypical use: faithfully implementing
    all the quirks of real devices. The more typical case is a guest which
    knows that it is running on a hypervisor and uses a virtual
    interface with no real counterpart. For this, the quality of the
    virtual interfaces matters. I do not know how ESXi compares
    to KVM, but I know that "equivalent" but different virtual
    interfaces in qemu+KVM may have markedly different performance.

    Are you talking about paravirtual drivers?

    To get back to VMS, I don't think VMS has any of those.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Waldek Hebisch on Tue Dec 3 14:39:45 2024
    In article <vilsop$2qc5u$1@paganini.bofh.team>,
    Waldek Hebisch <antispam@fricas.org> wrote:
    Matthew R. Wilson <mwilson@mattwilson.org> wrote:
    On 2024-11-28, Lawrence D'Oliveiro <ldo@nz.invalid> wrote:
    On Wed, 27 Nov 2024 22:24 +0000 (GMT Standard Time), John Dallman wrote:
    In article <vi84pm$6ct6$4@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    On Wed, 27 Nov 2024 16:33:56 -0500, David Turner wrote:

    I keep being told that VMWARE is not an OS in itself.
    But it is... based on Ubuntu Kernel.... stripped down but still
    Linux

    And not even using the native KVM virtualization architecture that is built into Linux.

    History: VMware ESXi was released in 2001 and KVM was merged into the
    Linux kernel in 2007.

    In other words, VMware has long been obsoleted by better solutions.

    Please explain how ESXi is obsolete, and how KVM is a better solution.

    Both KVM and ESXi use the processor's VT-x (or AMD's equivalent, AMD-V)
    extensions on x86 to efficiently handle instructions that require
    hypervisor intervention. I'm not sure how you'd judge which one is a
    better solution in that regard. So the only thing that matters, really,
    is the virtualization of everything other than the processor itself.

    Little nitpick: virtualization needs to handle _some_ system instructions.
    But with VT-x, and particularly with nested page tables, this should
    be easy.

    Sadly, not really. Virtualization needs to handle many
    instructions, of multiple types, and be able to do so gracefully
    and performantly. This includes, of course, the underlying
    hardware's supervisor instruction set and any privileged
    operations, but also those instructions that can leak data about
    the underlying hardware that the hypervisor would rather be
    hidden. Hence, `CPUID` forces an unconditional VM exit on x86.
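
    As a rough sketch of how that looks from a KVM-based VMM (Linux-only;
    ioctls and structs from <linux/kvm.h>; error handling omitted; illustrative
    only, not production code), userspace decides up front which CPUID leaves
    the guest will ever see:

        #include <fcntl.h>
        #include <linux/kvm.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/ioctl.h>

        int main(void)
        {
            int kvm  = open("/dev/kvm", O_RDWR);
            int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
            int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);

            int nent = 256;
            struct kvm_cpuid2 *cpuid =
                calloc(1, sizeof(*cpuid) + nent * sizeof(struct kvm_cpuid_entry2));
            cpuid->nent = nent;

            /* Ask KVM which CPUID leaves it can virtualize on this host... */
            ioctl(kvm, KVM_GET_SUPPORTED_CPUID, cpuid);

            /* ...edit entries here to hide or fake host details... */

            /* ...and install the table the guest vCPU will see. */
            ioctl(vcpu, KVM_SET_CPUID2, cpuid);

            printf("installed %u CPUID leaves for the guest\n", cpuid->nent);
            return 0;
        }

    When the guest later executes CPUID, the resulting exit is serviced from
    that table rather than leaking raw host values.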

    Moreover, there is the issue of unimplemented userspace
    instructions. Most virtualization systems provide a base
    "platform" that the guest may rely on, which will include some
    userspace instructions that may, or may not, be available on
    the underlying hardware. If a guest executes an instruction
    that is not implemented on the underlying hardware, even a
    non-privileged instruction, then the hypervisor must catch the
    resulting trap and emulate that instruction, and all of its
    side-effects. And in modern systems, this problem is
    exacerbated by VMs that can be migrated between different
    host systems over time. This, and suspension/resumption,
    also leads to all sorts of interesting edge cases that must be
    handled; how does one deal with TSC skew between systems, for
    example? What does a guest do when no time has elapsed from
    _its_ perspective, but it suddenly finds that real time has
    advanced by seconds, minutes, hours, or days?

    And with x86, even emulating simple instructions, like
    programmed IO, can be challenging. This is in part because
    VT-x does not bank the instruction bytes on the VMCS/VMCB on
    exit, so the hypervisor must look at the RIP from the exit, and
    then go and fetch the instruction bytes from the guest itself.
    But to do that the hypervisor must examine the state of the VCPU
    closely and emulate what the CPU would do in the fetching
    process exactly; for example, if the CPU is using paging, the
    hypervisor must be careful to set the A bit on the PTEs for
    where it thinks the instruction is coming from; similarly if that
    instruction spans a page boundary, and so on. And even then
    it cannot guarantee that it will do a perfect job: the VCPU may
    have been fetching from a page for which the TLB entry was stale
    and thus the instruction bytes the hypervisor reads following
    the guest's page tables may not be the actual bytes that the
    guest was reading.

    And this doesn't even begin to account for nested
    virtualization, which is easily an order of magnitude more work
    than simple virtualization.

    Also, see below.

    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system. qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does. Pretty much any x86 operating system,
    historical or current, _just works_ in ESXi. With qemu+KVM, you're
    going to have good success with the "big name" OSes...Windows, Linux,
    the major BSDs, etc., but you're going to be fighting with quirks and
    problems if you're trying, say, old OS/2 releases. That's not relevant
    for most people looking for virtualization solutions, and the problems
    aren't always insurmountable, but you're claiming that KVM is a "better"
    solution, whereas in my experience, in reality, ESXi is the better
    technology.

    What you wrote is now a very atypical use: faithfully implementing
    all the quirks of real devices. The more typical case is a guest which
    knows that it is running on a hypervisor and uses a virtual
    interface with no real counterpart. For this, the quality of the
    virtual interfaces matters. I do not know how ESXi compares
    to KVM, but I know that "equivalent" but different virtual
    interfaces in qemu+KVM may have markedly different performance.

    While enlightenments are a thing, and paravirtualization can
    dramatically increase performance, handling unmodified guests is
    still a very important use case for pretty much every serious
    virtualization system. And that does mean handling all the
    quirks of not just the CPU, but also the device models that the
    hypervisor presents to the guest. That's a big job.

    (As an aside, VMWare's _desktop_ [not server] virtualization product,
    VMWare Workstation, looks like it's making moves to use KVM under the
    hood, but they have said they will continue using their own proprietary
    virtual devices and drivers, which is really what sets VMWare apart from
    qemu. This is a move they've already made on both the Windows and Mac OS
    version of VMWare Workstation if I understand correctly [utilizing
    Hyper-V and Apple's Virtualization framework]. This makes sense... as I
    said, the underlying virtualization of the processor is being handled by
    the VT-x capabilities of the processor whether you're using VMWare,
    VirtualBox, KVM, etc., so when running a desktop product under Linux,
    you may as well use KVM but you still need other software to build the
    rest of the virtual system and its virtual devices, so that's where
    VMWare and qemu will still differentiate themselves. None of this is
    relevant for ESXi, though, because as has been pointed out earlier in
    the thread, it is not running on Linux at all, so VMKernel is providing
    its own implementation of, essentially, what KVM provides in the Linux
    kernel.)

    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs run unvirtualized and only the rest is virtualized. I do not know if this sets limits on the quality of virtualization, but that could be a valid reason for ESXi to provide its
    own kernel.

    That's correct; ESXi and Xen are architecturally similar. KVM
    and VMWare Player are more similar.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Lawrence D'Oliveiro on Tue Dec 3 09:40:40 2024
    On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs
    while in KVM+qemu some (frequently most) programs run
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type
    2“ hypervisors. It’s an obsolete distinction nowadays.

    No.

    If you look at what is available and what it is used for, you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to All on Tue Dec 3 09:42:42 2024
    On 12/3/2024 9:33 AM, Arne Vajhøj wrote:
    On 12/2/2024 10:09 PM, Waldek Hebisch wrote:
    Matthew R. Wilson <mwilson@mattwilson.org> wrote:
    KVM is largely dependent on qemu to provide the rest of the actual
    virtual system. qemu's a great project and I run a ton of desktop VMs
    with qemu+KVM, but it just doesn't have the level of maturity or
    edge-case support that ESXi does. Pretty much any x86 operating system,
    historical or current, _just works_ in ESXi.  With qemu+KVM, you're
    going to have good success with the "big name" OSes...Windows, Linux,
    the major BSDs, etc., but you're going to be fighting with quirks and
    problems if you're trying, say, old OS/2 releases. That's not relevant
    for most people looking for virtualization solutions, and the problems
    aren't always insurmountable, but you're claiming that KVM is a "better" solution, whereas in my experience, in reality, ESXi is the better
    technology.

    What you wrote is now a very atypical use: faithfully implementing
    all the quirks of real devices.  The more typical case is a guest which
    knows that it is running on a hypervisor and uses a virtual
    interface with no real counterpart.  For this, the quality of the
    virtual interfaces matters.  I do not know how ESXi compares
    to KVM, but I know that "equivalent" but different virtual
    interfaces in qemu+KVM may have markedly different performance.

    Are you talking about paravirtual drivers?

    To get back to VMS, I don't think VMS has any of those.

    Hmm. Not correct. Reading 9.2-3 installation notes:

    <quote>
    Also, two para-virtualized NICs, virtio for KVM, and VMXNET 3 for ESXi.
    </quote>
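
    For a sense of what the KVM/virtio side of that looks like (a Linux-guest
    illustration only -- a VMS guest discovers its devices its own way -- virtio
    devices appear on the virtual PCI bus with Red Hat's vendor ID 0x1af4):

        #include <dirent.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            /* Scan sysfs for PCI devices whose vendor ID is virtio's 0x1af4. */
            DIR *d = opendir("/sys/bus/pci/devices");
            if (!d) { perror("opendir"); return 1; }

            struct dirent *e;
            while ((e = readdir(d)) != NULL) {
                if (e->d_name[0] == '.')
                    continue;
                char path[512], vendor[16] = "";
                snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/vendor",
                         e->d_name);
                FILE *f = fopen(path, "r");
                if (!f)
                    continue;
                if (fgets(vendor, sizeof vendor, f) &&
                    strncmp(vendor, "0x1af4", 6) == 0)
                    printf("virtio device at %s\n", e->d_name);
                fclose(f);
            }
            closedir(d);
            return 0;
        }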

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 09:57:31 2024
    On 11/28/2024 8:24 AM, Dan Cross wrote:
    So Goldberg defined two "types" of hypervisor in his
    dissertation: Types 1 and 2. Of course, this is an over
    simplification, and those of us who work on OSes and hypervisors
    understand that these distinctions are blurry and more on a
    continuum than hard and fast buckets, but to a first order
    approximation these categories are useful.

    Roughly, a Type-1 hypervisor is one that runs on the bare metal
    and only supports guests; usually some special guest is
    designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
    examples of Type-1 hypervisors.

    Again, roughly, a Type-2 hypervisor is one that runs in the
    context of an existing operating system, using its services and implementation for some of its functionality; examples include
    KVM (they _say_ it's type 1, but that's really not true) and
    PA1050. Usually with a Type-2 HV you've got a userspace program
    running under the host operating system that provides control
    functionality, device models, and so on. QEMU is an example of
    such a thing (sometimes, confusingly, this is called the
    hypervisor while the kernel-resident component, is called the
    Virtual Machine Monitor, or VMM), but other examples exist:
    CrosVM, for instance.

    I think the relevant distinction is that type 1 runs in the
    kernel while type 2 runs on the kernel.

    KVM runs in Linux not on Linux. Which makes it type 1.

    If VSI created a hypervisor as part of VMS then if
    it was in SYS$SYSTEM it would be a type 2 while if it
    was in SYS$LOADABLE_IMAGES it would be a type 1.

    (the location of the EXE obviously doesn't matter, but
    the location implies how it works)

    QEMU is many things. I believe it can act as a CPU emulator,
    as a type 2 hypervisor, and as a control program for
    a type 1 hypervisor (KVM).

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Tue Dec 3 15:36:04 2024
    In article <vin68p$3sjr$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 11/28/2024 8:24 AM, Dan Cross wrote:
    So Goldberg defined two "types" of hypervisor in his
    dissertation: Types 1 and 2. Of course, this is an over
    simplification, and those of us who work on OSes and hypervisors
    understand that these distinctions are blurry and more on a
    continuum than hard and fast buckets, but to a first order
    approximation these categories are useful.

    Roughly, a Type-1 hypervisor is one that runs on the bare metal
    and only supports guests; usually some special guest is
    designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
    examples of Type-1 hypervisors.

    Again, roughly, a Type-2 hypervisor is one that runs in the
    context of an existing operating system, using its services and
    implementation for some of its functionality; examples include
    KVM (they _say_ it's type 1, but that's really not true) and
    PA1050. Usually with a Type-2 HV you've got a userspace program
    running under the host operating system that provides control
    functionality, device models, and so on. QEMU is an example of
    such a thing (sometimes, confusingly, this is called the
    hypervisor while the kernel-resident component, is called the
    Virtual Machine Monitor, or VMM), but other examples exist:
    CrosVM, for instance.

    I think the relevant distinction is that type 1 runs in the
    kernel while type 2 runs on the kernel.

    No. They both run in supervisor mode. On x86, this is even
    necessary; the instructions to enter guest mode are privileged.

    Go back to Goldberg's dissertation; he discusses this at length.

    KVM runs in Linux not on Linux. Which makes it type 1.

    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 10:45:45 2024
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin597$3sjr$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs run
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type
    2“ hypervisors. It’s an obsolete distinction nowadays.

    No.

    If you look at what is available and what it is used for, you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    No, that has nothing to do with it.

    Yes. It has.

    The question was whether the type 1 vs type 2 distinction is obsolete.

    The fact that "what is labeled type 1 is used for production and what is labeled type 2 is used for development" proves that people think it
    matters.

    So either almost everybody is wrong or it matters.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Tue Dec 3 15:55:22 2024
    In article <vin939$3sjr$5@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin597$3sjr$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs while in KVM+qemu some (frequently most) programs run
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type
    2“ hypervisors. It’s an obsolete distinction nowadays.

    No.

    If you look at what is available and what it is used for then you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    No, that has nothing to do with it.

    Yes. It has.

    The question was whether the type 1 vs type 2 distinction is obsolete.

    As I've posted on numerous occasions, at length, citing primary
    sources, the distinction is not exact; that doesn't mean that it
    is obsolete or useless.

    The fact that "what is labeled type 1 is used for production and what is >labeled type 2 is used for development" proves that people think it
    matters.

    That seems to be something you invented: I can find no serious
    reference that suggests that what you wrote is true, so it is
    hard to see how it "proves" anything. KVM is used extensively
    in production and is a type-2 hypervisor, for example. z/VM is
    used extensively in production, and claims to be a type-2
    hypervisor (even though it more closely resembles a type-1 HV).

    So either almost everybody is wrong or it matters.

    Well, I think you are wrong, yes.

    Again, as I mentioned, and as I've posted here at length before,
    the distinction is blurry and exists on a spectrum; it is not a
    rigid thing. That doesn't imply that it is not useful, or that
    it is obsolete.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 11:03:20 2024
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin68p$3sjr$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 11/28/2024 8:24 AM, Dan Cross wrote:
    So Goldberg defined two "types" of hypervisor in his
    dissertation: Types 1 and 2. Of course, this is an over
    simplification, and those of us who work on OSes and hypervisors
    understand that these distinctions are blurry and more on a
    continuum than hard and fast buckets, but to a first order
    approximation these categories are useful.

    Roughly, a Type-1 hypervisor is one that runs on the bare metal
    and only supports guests; usually some special guest is
    designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
    examples of Type-1 hypervisors.

    Again, roughly, a Type-2 hypervisor is one that runs in the
    context of an existing operating system, using its services and
    implementation for some of its functionality; examples include
    KVM (they _say_ it's type 1, but that's really not true) and
    PA1050. Usually with a Type-2 HV you've got a userspace program
    running under the host operating system that provides control
    functionality, device models, and so on. QEMU is an example of
    such a thing (sometimes, confusingly, this is called the
    hypervisor while the kernel-resident component is called the
    Virtual Machine Monitor, or VMM), but other examples exist:
    CrosVM, for instance.

    I think the relevant distinction is that type 1 runs in the
    kernel while type 2 runs on the kernel.

    Reinserted:
    # If VSI created a hypervisor as part of VMS then if
    # it was in SYS$SYSTEM it would be a type 2 while if it
    # was in SYS$LOADABLE_IMAGES it would be a type 1.


    No. They both run in supervisor mode. On x86, this is even
    necessary; the instructions to enter guest mode are privileged.

    That the code does something that ends up bringing the CPU into
    privileged mode does not make the code part of the kernel.

    To build on the VMS example the hypothetical type 2
    hypervisor in SYS$SYSTEM could (if properly authorized)
    call SYS$CMKRNL and do whatever. It would not become
    part of the VMS kernel from that.
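
    For illustration, a minimal, hypothetical sketch in C of such a
    user-mode image calling SYS$CMKRNL (the image needs the CMKRNL
    privilege; the sys$cmkrnl prototype is in starlet.h, and the
    argument types here are simplified, so a cast may be needed):

        #include <ssdef.h>
        #include <starlet.h>
        #include <stdio.h>

        /* Executed in kernel mode by SYS$CMKRNL, but the image that
         * contains it is still an ordinary image, not part of the VMS
         * kernel.  It must return a condition value. */
        static int do_in_kernel_mode(void)
        {
            return SS$_NORMAL;
        }

        int main(void)
        {
            /* Second argument is an optional argument list for the
             * routine; omitted here. */
            int status = sys$cmkrnl(do_in_kernel_mode, 0);
            printf("SYS$CMKRNL returned status %d\n", status);
            return status;
        }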

    Just like VMWare Player or VirtualBox running on Windows
    is not part of the Windows kernel even if they do use CPU
    support for virtualization.

    Go back to Goldberg's dissertation; he discusses this at length.

    KVM runs in Linux not on Linux. Which makes it type 1.

    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    It is the opposite. KVM is type 1 not because it is separable
    from Linux but because it is inseparable from Linux.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Tue Dec 3 16:10:25 2024
    In article <vina48$3sjr$6@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin68p$3sjr$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 11/28/2024 8:24 AM, Dan Cross wrote:
    So Goldberg defined two "types" of hypervisor in his
    dissertation: Types 1 and 2. Of course, this is an over
    simplification, and those of us who work on OSes and hypervisors
    understand that these distinctions are blurry and more on a
    continuum than hard and fast buckets, but to a first order
    approximation these categories are useful.

    Roughly, a Type-1 hypervisor is one that runs on the bare metal
    and only supports guests; usually some special guest is
    designated as a trusted "root VM". Xen, ESXi, and Hyper-V are
    examples of Type-1 hypervisors.

    Again, roughly, a Type-2 hypervisor is one that runs in the
    context of an existing operating system, using its services and
    implementation for some of its functionality; examples include
    KVM (they _say_ it's type 1, but that's really not true) and
    PA1050. Usually with a Type-2 HV you've got a userspace program
    running under the host operating system that provides control
    functionality, device models, and so on. QEMU is an example of
    such a thing (sometimes, confusingly, this is called the
    hypervisor while the kernel-resident component is called the
    Virtual Machine Monitor, or VMM), but other examples exist:
    CrosVM, for instance.

    I think the relevant distinction is that type 1 runs in the
    kernel while type 2 runs on the kernel.

    Reinserted:
    # If VSI created a hypervisor as part of VMS then if
    # it was in SYS$SYSTEM it would be a type 2 while if it
    # was in SYS$LOADABLE_IMAGES it would be a type 1.

    Irrelevant; this is based on your misconception of what a type-1
    hypervisor is vs a type-2.

    No. They both run in supervisor mode. On x86, this is even
    necessary; the instructions to enter guest mode are privileged.

    That the code does something that ends up bringing the CPU into
    privileged mode does not make the code part of the kernel.

    To build on the VMS example the hypothetical type 2
    hypervisor in SYS$SYSTEM could (if properly authorized)
    call SYS$CMKRNL and do whatever. It would not become
    part of the VMS kernel from that.

    This isn't really relevant.

    Just like VMWare Player or VirtualBox running on Windows
    is not part of the Windows kernel even if they do use CPU
    support for virtualization.

    They rely on existing OS services for resource allocation,
    scheduling, memory management, etc, which is why they are
    type-2 HV's and not type-1. Xen, Hyper-V, and ESXi implement
    those things themselves, which is why they are type-1, and not
    type-2.

    Go back to Goldberg's dissertation; he discusses this at length.

    ^^^
    Read this part again, Arne.

    KVM runs in Linux not on Linux. Which makes it type 1.

    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    It is the opposite. KVM is type 1 not because it is separable
    from Linux but because it is inseparable from Linux.

    Kinda. The claim is that KVM turns Linux+KVM into a type-1
    hypervisor; that is, the entire combination becomes the HV.
    That's sort of a silly distinction, though, since the real
    differentiator, defined by Goldberg, is whether or not the VMM
    makes use of existing system services, which KVM very much does.

    I wrote about this here, at length, several years ago. C.f., https://groups.google.com/g/comp.os.vms/c/nPYz56qulqg/m/vTDtsFNRAgAJ

    Perhaps go review that post and read the associated references.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dennis Boone@21:1/5 to All on Tue Dec 3 16:46:08 2024
    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    Well, the Joyent folks moved it to Illumos, so it was at least sorta
    separable. And it still works, though the community seems to have
    decided that Bhyve is better, so it will probably rot over time.

    De

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 11:51:01 2024
    On 12/3/2024 11:10 AM, Dan Cross wrote:
    In article <vina48$3sjr$6@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin68p$3sjr$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    KVM runs in Linux not on Linux. Which makes it type 1.

    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    It is the opposite. KVM is type 1 not because it is separable
    from Linux but because it is inseparable from Linux.

    Kinda. The claim is that KVM turns Linux+KVM into a type-1
    hypervisor; that is, the entire combination becomes the HV.
    That's sort of a silly distinction, though, since the real
    differentiator, defined by Goldberg, is whether or not the VMM
    makes use of existing system services, which KVM very much does.

    ESXi is basic OS functionality and virtualization services
    in a single kernel.

    Linux+KVM is basic OS functionality and virtualization services
    in a single kernel.

    They are logically working the same way.

    The differences are not in how they work, but in history
    and reusability in other contexts:
    * Linux existed before KVM
    * Linux has more functionality so it can be and is used without KVM

    But type 1 vs type 2 should depend on how it works not on
    history and reusability in other contexts.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 11:26:42 2024
    On 12/3/2024 10:55 AM, Dan Cross wrote:
    In article <vin939$3sjr$5@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin597$3sjr$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs
    while in KVM+qemu some (frequently most) programs are running
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type
    2” hypervisors. It’s an obsolete distinction nowadays.

    No.

    If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    No, that has nothing to do with it.

    Yes. It has.

    The question was whether the type 1 vs type 2 distinction is obsolete.

    As I've posted on numerous occasions, at length, citing primary
    sources, the distinction is not exact; that doesn't mean that it
    is obsolete or useless.

    The post I was replying to called it obsolete. So that was the topic
    of my post.

    The fact that "what is labeled type 1 is used for production and what is
    labeled type 2 is used for development" proves that people think it
    matters.

    That seems to be something you invented: I can find no serious
    reference that suggests that what you wrote is true,

    Is it your experience that people do their development on ESXi/KVM
    and run their production on VMWare Player/VirtualBox?

    :-)

    People do development on VMWare Player/VirtualBox and run
    production on ESXi/KVM.

    so it is
    hard to see how it "proves" anything. KVM is used extensively
    in production and is a type-2 hypervisor, for example.

    When I wrote "is labeled" I am talking about what the
    authors and the industry in general are calling it.

    In that sense KVM is labeled a type 1 hypervisor. I can
    find Redhat links if you don't believe me.

    That you consider it to be type 2 does not really matter.

    z/VM is
    used extensively in production, and claims to be a type-2
    hypervisor (even though it more closely resembles a type-1 HV).

    True.

    The type 1 for production and type 2 for development does
    not hold in the mainframe world.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Tue Dec 3 17:08:43 2024
    In article <vinctl$3sjq$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 11:10 AM, Dan Cross wrote:
    In article <vina48$3sjr$6@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin68p$3sjr$4@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    KVM runs in Linux not on Linux. Which makes it type 1.

    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    It is the opposite. KVM is type 1 not because it is separable
    from Linux but because it is inseparable from Linux.

    Kinda. The claim is that KVM turns Linux+KVM into a type-1
    hypervisor; that is, the entire combination becomes the HV.
    That's sort of a silly distinction, though, since the real
    differentiator, defined by Goldberg, is whether or not the VMM
    makes use of existing system services, which KVM very much does.

    ESXi is basic OS functionality and virtualization services
    in a single kernel.

    Yes, but it doesn't do much other than run VMs and support those
    VMs.

    Linux+KVM is basic OS functionality and virtualization services
    in a single kernel.

    Yes, but it does much more than just run VMs. For example, I
    could run, say, an instance of an RDBMS on the same host as I
    run a VM. Linux, as a kernel, is separable from KVM; KVM, as
    a module, is not separable from Linux.

    They are logically working the same way.

    Funny how this is the inverse of what you tried to argue
    in https://groups.google.com/g/comp.os.vms/c/nPYz56qulqg/m/LN-xzlJ1AwAJ,
    where you wrote:

    The differences are not in how they work, but in history
    and reusability in other contexts:
    * Linux existed before KVM
    * Linux has more functionality so it can be and is used without KVM

    Yes, and that's the distinction Goldberg defined.

    But type 1 vs type 2 should depend on how it works not on
    history and reusability in other contexts.

    Like I said, the terminology is imprecise.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to Dennis Boone on Tue Dec 3 17:09:30 2024
    In article <LXydneGd95xNqNL6nZ2dnZfqn_GdnZ2d@giganews.com>,
    Dennis Boone <drb@ihatespam.msu.edu> wrote:
    Nope. KVM is dependent on Linux at this point. The claim that
    it is a type-1 hypervisor is predicated on the idea that it was
    separable from Linux, but I don't think anyone believes that
    anymore.

    Well, the Joyent folks moved it to Illumos, so it was at least sorta separable. And it still works, though the community seems to have
    decided that Bhyve is better, so it will probably rot over time.

    They did, and they decided that it was too much hassle and that
    keeping Bhyve running was easier.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Tue Dec 3 17:36:39 2024
    In article <vinbg2$3sjr$7@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:55 AM, Dan Cross wrote:
    In article <vin939$3sjr$5@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 10:36 AM, Dan Cross wrote:
    In article <vin597$3sjr$2@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs
    while in KVM+qemu some (frequently most) programs are running
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type
    2” hypervisors. It’s an obsolete distinction nowadays.

    No.

    If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    No, that has nothing to do with it.

    Yes. It has.

    The question was whether the type 1 vs type 2 distinction is obsolete.

    As I've posted on numerous occasions, at length, citing primary
    sources, the distinction is not exact; that doesn't mean that it
    is obsolete or useless.

    The post I was replying to called it obsolete. So that was the topic
    of my post.

    Yes. I somewhat agree; I just think your argument is predicated
    on falsehoods. I don't disagree with your conclusion, I
    disagree with your framing.

    The fact that "what is labeled type 1 is used for production and what is >>> labeled type 2 is used for development" proves that people think it
    matters.

    That seems to be something you invented: I can find no serious
    reference that suggests that what you wrote is true,

    Is it your experience that people do their development on ESXi/KVM
    and run their production on VMWare Player/VirtualBox?

    Some people do, yes. Many others run production workloads on
    Bhyve and KVM.

    :-)

    People do development on VMWare Player/VirtualBox and run
    production on ESXi/KVM.

    Some people do. Some people do development on z/VM and deploy
    on z/VM. Some do development on bare metal and deploy on KVM or
    Bhyve (or z/VM).

    Some people do development on VMs hosted on ESXi.

    hard to see how it "proves" anything. KVM is used extensively
    in production and is a type-2 hypervisor, for example.

    When I wrote "is labeled" I am talking about what the
    authors and the industry in general are calling it.

    I see no evidence for that, and plenty of contradicting information.

    In that sense KVM is labeled a type 1 hypervisor. I can
    find Redhat links if you don't believe me.

    I know. I already said that it was claimed that it was a type-1
    HV. Here, I'll save you the trouble of finding the RedHat link: https://www.redhat.com/en/topics/virtualization/what-is-a-hypervisor

    Here's the relevant section:

    |A type 1 hypervisor, also referred to as a native or bare metal
    |hypervisor, runs directly on the host's hardware to manage
    |guest operating systems. It takes the place of a host operating
    |system and VM resources are scheduled directly to the hardware
    |by the hypervisor.

    Yes, that's Goldberg's definition.

    |This type of hypervisor is most common in an enterprise data
    |center or other server-based environments.

    Ok, sure; that's marketing speak, but whatever.

    |KVM, Microsoft Hyper-V, and VMware vSphere are examples of a
    |type 1 hypervisor. KVM was merged into the Linux kernel in
    |2007, so if you're using a modern version of Linux, you already
    |have access to KVM.

    Here's the problem. How does KVM match the definition of a
    type-1 hypervisor listed above? In particular, we know that it
    delegates the functionality for resource management and
    scheduling to Linux. Indeed, actually causing a VCPU to run is
    done by executing a system call from a userspace process using
    e.g. QEMU or CrosVM or Firecracker or some other userspace HV
    component.

    It then goes on to say:

    |A type 2 hypervisor is also known as a hosted hypervisor, and
    |is run on a conventional operating system as a software layer
    |or application.

    Yup. That's exactly what KVM does.

    So yes. RedHat calls KVM a type-1 hypervisor, but that doesn't
    make it so. The industry writ large commonly accepts it as a
    type-2 HV.

    That you consider it to be type 2 does not really matter.

    Not just me. Do a literature search and tell me what the
    consensus is about whether KVM is a type-1 or type-2 hypervisor.

    Here's an example from the book "Hardware and Software Support for Virtualization", by Edouard Bugnion, Jason Nieh and Dan Tsafrir
    (Morgan and Claypool, 2017). From page 7:

    |We note that the emphasis is on resource allocation, and not
    |whether the hypervisor runs in privileged or non-privileged
    |mode. In particular, a hypervisor can be a type-2 even when it
    |runs in kernel-mode, e.g., Linux/KVM and VMware Workstation
    |operate this way. In fact, Goldberg assumed that the
    |hypervisor would always be executing with supervisor
    |privileges.

    In fact, we can go deeper. If we go back to the 2007 KVM paper
    by Kivity et al from the Ottawa Linux Symposium (https://www.kernel.org/doc/ols/2007/ols2007v1-pages-225-230.pdf)
    we can see this text in the abstract:

    |The Kernel-based Virtual Machine, or kvm, is a new Linux
    |subsystem which leverages these virtualization extensions to
    |add a virtual machine monitor (or hypervisor) capability to
    |Linux. Using kvm, one can create and run multiple virtual
    |machines. These virtual machines appear as normal Linux
    |processes and integrate seamlessly with the rest of the system.

    This is precisely what type-2 hypervisors do. Note also this,
    from section 3 of that paper:

    |Under kvm, virtual machines are created by opening a device
    |node (/dev/kvm). A guest has its own memory, separate from the
    |userspace process that created it. A virtual cpu is not
    |scheduled on its own, however.

    So we see that guests are created by opening a device file, and
    furthermore, that VCPU scheduling is not done by KVM (an
    important criterion for a type-1 hypervisor is that it handles
    VCPU scheduling). And while a guest does own its own memory,
    inspection of the KVM implementation shows that this is done by
    using memory primitives provided by Linux.
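
    To make the mechanics concrete, here is a minimal sketch of the
    userspace side of that flow in C (error handling, guest memory
    setup via KVM_SET_USER_MEMORY_REGION, and register setup are all
    omitted, so KVM_RUN will not get far; the point is only that the
    VM and the VCPU are plain file descriptors driven by an ordinary
    Linux process through system calls):

        #include <fcntl.h>
        #include <linux/kvm.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void)
        {
            /* The hypervisor's front door is a device node. */
            int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
            if (kvm < 0) { perror("open /dev/kvm"); return 1; }

            /* The VM and its VCPU are created with ioctl()s; both are
             * just file descriptors owned by this process, and Linux --
             * not KVM -- schedules the thread that drives the VCPU. */
            int vmfd   = ioctl(kvm, KVM_CREATE_VM, 0);
            int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);

            /* The kvm_run structure shared with the kernel is mapped
             * from the VCPU file descriptor. */
            long sz = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
            struct kvm_run *run = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, vcpufd, 0);

            /* Guest memory and initial registers would be set up here. */

            /* Entering the guest is an ordinary system call; when the
             * guest exits, control returns here with the reason. */
            ioctl(vcpufd, KVM_RUN, 0);
            printf("exit_reason = %u\n", run->exit_reason);

            close(vcpufd); close(vmfd); close(kvm);
            return 0;
        }

    In practice that userspace role is played by QEMU, CrosVM,
    Firecracker, or a cloud vendor's in-house binary, but the shape of
    the interaction with the kernel module is the same.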

    So despite what some intro text on a RedHat web page says, KVM
    does not meet any of the criteria for being a type-1 HV, while
    it does meet the criteria for being a type-2 HV.

    used extensively in production, and claims to be a type-2
    hypervisor (even though it more closely resembles a type-1 HV).

    True.

    The type 1 for production and type 2 for development does
    not hold in the mainframe world.

    It doesn't really hold anywhere; that's the point.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Dec 3 20:24:19 2024
    On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:

    If you look at what is available and what it is used for then you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    What people discovered was, they needed to run full-fat system management suites, reporting tools, backup/maintenance tools etc on the hypervisor.
    In other words, all the regular filesystem-management functions you need
    on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
    cut it any more -- virtualization is nowadays done on full-function Linux kernels (all “type 2”).

    That’s why the distinction is obsolete.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Lawrence D'Oliveiro@21:1/5 to All on Tue Dec 3 20:27:48 2024
    On Tue, 3 Dec 2024 09:57:31 -0500, Arne Vajhøj wrote:

    I think the relevant distinction is that type 1 runs in the kernel while
    type 2 runs on the kernel.

    <https://en.wikipedia.org/wiki/Hypervisor>:

    Type-1, native or bare-metal hypervisors
    These hypervisors run directly on the host's hardware to
    control the hardware and to manage guest operating systems.
    For this reason, they are sometimes called bare-metal
    hypervisors. The first hypervisors, which IBM developed in the
    1960s, were native hypervisors.[8] These included the test
    software SIMMON and the CP/CMS operating system, the
    predecessor of IBM's VM family of virtual machine operating
    systems. Examples of Type-1 hypervisor include Hyper-V, Xen
    and VMware ESXi.

    Type-2 or hosted hypervisors
    These hypervisors run on a conventional operating system (OS)
    just as other computer programs do. A virtual machine monitor
    runs as a process on the host, such as VirtualBox. Type-2
    hypervisors abstract guest operating systems from the host
    operating system, effectively creating an isolated system that
    can be interacted with by the host. Examples of Type-2
    hypervisor include VirtualBox and VMware Workstation.

    The distinction between these two types is not always clear. For
    instance, KVM and bhyve are kernel modules[9] that effectively
    convert the host operating system to a type-1 hypervisor.[10]

    I would say those examples contradict the definitions, since Linux with
    KVM is very much a “conventional OS”, and the same would be true of the BSDs.

    But then again, that just reinforces the point that the distinction is obsolete.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Lawrence D'Oliveiro on Tue Dec 3 19:16:26 2024
    On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:

    If you look at what is available and what it is used for then you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    What people discovered was, they needed to run full-fat system management suites, reporting tools, backup/maintenance tools etc on the hypervisor.
    In other words, all the regular filesystem-management functions you need
    on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
    cut it any more -- virtualization is nowadays done on full-function Linux kernels (all “type 2”).

    Having a full host OS is very nice for a development system with a few
    VM's to build and test various stuff.

    It does not scale to a large production environment. For that you need
    central management servers.

    ESXi has the vSphere suite of products. For many years the basic ESXi
    was actually free and customers only paid for the advanced vSphere
    stuff.

    For KVM there are many products to choose from. Redhat has
    Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
    but it came under the OpenShift umbrella when containers took
    off). The big cloud vendors that may be managing millions of
    servers must have some custom tools for that. You gave a link
    to someone switching to the OpenNebula product. Proxmox VE is
    another option. Lots of different products with different
    feature sets to match different requirements.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Wed Dec 4 00:41:41 2024
    In article <vio70q$e1fp$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:

    If you look at what is available and what it is used for then you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    What people discovered was, they needed to run full-fat system management
    suites, reporting tools, backup/maintenance tools etc on the hypervisor.
    In other words, all the regular filesystem-management functions you need
    on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
    cut it any more -- virtualization is nowadays done on full-function Linux
    kernels (all “type 2”).

    Having a full host OS is very nice for a development system with a few
    VM's to build and test various stuff.

    It does not scale to a large production environment. For that you need central management servers.

    There are some very senior engineers at Google and Amazon who
    run the largest VM-based production environments on the planet
    and they disagree. There, VMs run under a "full host OS."

    ESXi has the vSphere suite of products. For many years the basic ESXi
    was actually free and customers only paid for the advanced vSphere
    stuff.

    For KVM there are many products to choose from. Redhat has
    Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
    but it came under the OpenShift umbrella when containers took
    off). The big cloud vendors that may be managing millions of
    servers must have some custom tools for that. You gave a link
    to someone switching to the OpenNebula product. Proxmox VE is
    another option. Lots of different products with different
    feature sets to match different requirements.

    It's unclear what you think that KVM is. KVM requires a
    userspace component to actually drive the VCPUs; that runs under
    Linux, which is a "full host OS." At least Google uses the same
    management tools to drive those processes as it uses for the
    rest of its production services (e.g., borg, etc). The
    userspace component for GCP is not QEMU, but rather, a Google
    authored program. However, it is in all respects just another
    google3 binary.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 19:50:55 2024
    On 12/3/2024 7:41 PM, Dan Cross wrote:
    In article <vio70q$e1fp$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:
    If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    What people discovered was, they needed to run full-fat system management
    suites, reporting tools, backup/maintenance tools etc on the hypervisor.
    In other words, all the regular filesystem-management functions you need
    on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
    cut it any more -- virtualization is nowadays done on full-function Linux
    kernels (all “type 2”).

    Having a full host OS is very nice for a development system with a few
    VM's to build and test various stuff.

    It does not scale to a large production environment. For that you need
    central management servers.

    There are some very senior engineers at Google and Amazon who
    run the largest VM-based production environments on the planet
    and they disagree. There, VMs run under a "full host OS."

    You totally missed the point.

    With KVM they do have a full host OS.

    But they don't need it to "run full-fat system management
    suites, reporting tools, backup/maintenance tools etc on
    the hypervisor", because they don't manage all those VM's
    that way. That would be impossible.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Dan Cross on Tue Dec 3 20:05:22 2024
    On 12/3/2024 7:41 PM, Dan Cross wrote:
    In article <vio70q$e1fp$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    ESXi has the vSphere suite of products. For many years the basic ESXi
    was actually free and customers only paid for the advanced vSphere
    stuff.

    For KVM there are many products to choose from. Redhat has
    Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
    but it came under the OpenShift umbrella when containers took
    off). The big cloud vendors that may be managing millions of
    servers must have some custom tools for that. You gave a link
    to someone switching to the OpenNebula product. Proxmox VE is
    another option. Lots of different products with different
    feature sets to match different requirements.

    It's unclear what you think that KVM is. KVM requires a
    userspace component to actually drive the VCPUs; that runs under
    Linux, which is a "full host OS." At least Google uses the same
    management tools to drive those processes as it uses for the
    rest of its production services (e.g., borg, etc). The
    userspace component for GCP is not QEMU, but rather, a Google
    authored program. However, it is in all respects just another
    google3 binary.

    That is the general model.

    central management server---(network)---management agent---hypervisor

    Details can vary but that is the only way to manage at scale.

    And that is why the claim that the hypervisor has to come with
    a full host OS does not hold water for large production
    environments.

    They just need the very basic OS, the virtualization service
    and the agent.

    Google could tailor down the Linux KVM they use to the very
    minimum if they wanted to. But I have no idea if they have
    actually bothered doing so.

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Wed Dec 4 01:20:14 2024
    In article <vio91g$e1fq$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 7:41 PM, Dan Cross wrote:
    In article <vio70q$e1fp$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 3:24 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 09:40:40 -0500, Arne Vajhøj wrote:
    If you look at what is available and what it is used for then you will see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    What people discovered was, they needed to run full-fat system management
    suites, reporting tools, backup/maintenance tools etc on the hypervisor.
    In other words, all the regular filesystem-management functions you need
    on any server machine. So having it be a cut-down kernel (“type 1”) didn’t
    cut it any more -- virtualization is nowadays done on full-function Linux
    kernels (all “type 2”).

    Having a full host OS is very nice for a development system with a few
    VM's to build and test various stuff.

    It does not scale to a large production environment. For that you need
    central management servers.

    There are some very senior engineers at Google and Amazon who
    run the largest VM-based production environments on the planet
    and they disagree. There, VMs run under a "full host OS."

    You totally missed the point.

    With KVM they do have a full host OS.

    But they don't need it to "run full-fat system management
    suites, reporting tools, backup/maintenance tools etc on
    the hypervisor", because they don't manage all those VM's
    that way. That would be impossible.

    Actually, they do.

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dan Cross@21:1/5 to arne@vajhoej.dk on Wed Dec 4 01:25:36 2024
    In article <674faad2$0$705$14726298@news.sunsite.dk>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/3/2024 7:41 PM, Dan Cross wrote:
    In article <vio70q$e1fp$1@dont-email.me>,
    Arne Vajhøj <arne@vajhoej.dk> wrote:
    ESXi has the vSphere suite of products. For many years the basic ESXi
    was actually free and customers only paid for the advanced vSphere
    stuff.

    For KVM there are many products to choose from. Redhat has
    Redhat OpenShift Virtualization (it used to be Redhat Virtualization,
    but it came under the OpenShift umbrella when containers took
    off). The big cloud vendors that may be managing millions of
    servers must have some custom tools for that. You gave a link
    to someone switching to the OpenNebula product. Proxmox VE is
    another option. Lots of different products with different
    feature sets to match different requirements.

    It's unclear what you think that KVM is. KVM requires a
    userspace component to actually drive the VCPUs; that runs under
    Linux, which is a "full host OS." At least Google uses the same
    management tools to drive those processes as it uses for the
    rest of its production services (e.g., borg, etc). The
    userspace component for GCP is not QEMU, but rather, a Google
    authored program. However, it is in all respects just another
    google3 binary.

    That is the general model.

    central management server---(network)---management agent---hypervisor

    Details can vary but that is the only way to manage at scale.

    If all you want to run on your host is VMs, maybe.

    And that is why the claim that the hypervisor has to come with
    a full host OS does not hold water for large production
    environments.

    Define "full host OS." My definition is a fully functinal,
    general purpose operating system, with a full complement of
    userspace tools, plus whatever applications the environment
    it is running in require for management and maintenance. This
    includes the job scheduling daemon, but also system monitoring,
    binary copies, upgrade agents, watchdogs, etc. In this case,
    we're talking about Linux. In the Google environment, that's
    the Google version of the kernel (prodkernel or increasingly
    icebreaker: https://lwn.net/Articles/871195/), plus a set of
    packages providing the usual complement of Unix-y command line
    tools, borglet, the monitoring daemon, and a number of other
    custom daemons.

    They just need the very basic OS, the virtualization service
    and the agent.

    Not how they do it.

    Google could tailor down the Linux KVM they use to the very
    minimum if they wanted to. But I have no idea if they have
    actually bothered doing so.

    They have not, nor would they. There is substantial benefit at
    Google scale to having a basic "node" architecture that's more
    or less indistinguishable between the systems that run, say,
    GMail and those that run GCP. Plus all of Google's internal
    services (globe-spanning distributed filesystems, databases,
    etc).

    - Dan C.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dave Froble@21:1/5 to All on Tue Dec 3 23:01:46 2024
    On 12/3/2024 9:40 AM, Arne Vajhøj wrote:
    On 12/2/2024 11:57 PM, Lawrence D'Oliveiro wrote:
    On Tue, 3 Dec 2024 03:09:15 -0000 (UTC), Waldek Hebisch wrote:
    From what you wrote it seems that ESXi is more similar to Xen than to
    KVM+qemu, that is, ESXi and Xen discourage running unvirtualized programs
    while in KVM+qemu some (frequently most) programs are running
    unvirtualized and only the rest is virtualized.

    I think that dates back to the old distinction between “type 1” and “type
    2” hypervisors. It’s an obsolete distinction nowadays.

    No.

    If you look at what is available and what it is used for then you will
    see that what is labeled type 1 is used for production and what is
    labeled type 2 is used for development. It matters.

    Arne


    Is that a hard rule? I doubt it.

    Though, some may feel that the "type 1" (whatever that really is, or whether it matters) might be a bit safer ...

    --
    David Froble Tel: 724-529-0450
    Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
    DFE Ultralights, Inc.
    170 Grimplin Road
    Vanderbilt, PA 15486

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Simon Clubley@21:1/5 to John Dallman on Wed Dec 4 13:20:55 2024
    On 2024-12-02, John Dallman <jgd@cix.co.uk> wrote:
    In article <vil9jg$3ives$3@dont-email.me>, ldo@nz.invalid (Lawrence D'Oliveiro) wrote:

    . . . a company which switched from VMware to an open-source
    alternative as a result of Broadcom's massive price hikes,
    and encountered an unexpected benefit: the resources consumed
    by system management overhead on the new product were so much
    less, they could run more VMs on the same hardware.

    That will be nice if it happens, but the pricing is a fully sufficient
    reason for moving. The way that some companies are seeing 1,000%, while others see 300% or 500% makes customers very suspicious that Broadcom are trying to jack up the price as much as each customer will take. If so,
    they aren't very good at that.

    My employer was given a special one-off offer of 500% and went "Hell,
    no!"


    Are you sure your employer's response was not a little more Anglo-Saxon
    in nature ? :-)

    On a more serious note, does anyone else think Broadcom are showing absolute contempt towards their users ? It reminds me of the person who took over
    supply of a vital medical drug in the US a few years ago and promptly
    increased the price massively because the users of the drug where a capture market that _needed_ to buy the drug.

    This is so blatant by Broadcom, I'm surprised the EU has not got more
    seriously involved.

    Simon.

    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Simon Clubley@21:1/5 to Simon Clubley on Wed Dec 4 13:44:07 2024
    On 2024-12-04, Simon Clubley <clubley@remove_me.eisner.decus.org-Earth.UFP> wrote:

    Are you sure your employer's response was not a little more Anglo-Saxon
    in nature ? :-)

    On a more serious note, does anyone else think Broadcom are showing absolute contempt towards their users ? It reminds me of the person who took over supply of a vital medical drug in the US a few years ago and promptly increased the price massively because the users of the drug where a capture

    s/where a capture/were a captive/

    Sorry.

    market that _needed_ to buy the drug.

    This is so blatant by Broadcom, I'm surprised the EU has not got more seriously involved.

    Simon.



    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Arne Vajhøj@21:1/5 to Simon Clubley on Wed Dec 4 10:44:16 2024
    On 12/4/2024 8:20 AM, Simon Clubley wrote:
    On a more serious note, does anyone else think Broadcom are showing absolute contempt towards their users ?

    I am expecting companies to attempt to maximize profit.

    That expectation tends to minimize disappointment. :-)

    Question is of course whether Broadcom is maximizing profit!

    The pricing strategy seems to be to cash in now and not worry about
    the long term, as opposed to trying to set up a long-term steady income.

    Given the move to containers and cloud, I actually think that it may
    be a profit-maximizing strategy. With a shrinking market, the value
    of the long term is not so big.

    But that raises another question: why did they pay so much? The price
    hikes may be profit maximizing, but they will not bring in what they paid
    for VMWare.

    It reminds me of the person who took over
    supply of a vital medical drug in the US a few years ago and promptly increased the price massively because the users of the drug where a capture market that _needed_ to buy the drug.

    This guy:

    https://en.wikipedia.org/wiki/Martin_Shkreli

    ?

    Arne

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Simon Clubley@21:1/5 to arne@vajhoej.dk on Thu Dec 5 13:21:59 2024
    On 2024-12-04, Arne Vajhøj <arne@vajhoej.dk> wrote:
    On 12/4/2024 8:20 AM, Simon Clubley wrote:
    On a more serious note, does anyone else think Broadcom are showing absolute contempt towards their users ?

    I am expecting companies to attempt to maximize profit.

    That expectation tends to minimize disappointment. :-)

    Question is of course whether Broadcom is maximizing profit!

    The pricing strategy seems to be to cash in now and not worry about
    the long term, as opposed to trying to set up a long-term steady income.

    Given the move to containers and cloud, I actually think that it may
    be a profit-maximizing strategy. With a shrinking market, the value
    of the long term is not so big.

    But that raises another question: why did they pay so much? The price
    hikes may be profit maximizing, but they will not bring in what they paid
    for VMWare.


    I wonder if their level of arrogance exceeded their level of competence ?

    It reminds me of the person who took over
    supply of a vital medical drug in the US a few years ago and promptly
    increased the price massively because the users of the drug where a capture market that _needed_ to buy the drug.

    This guy:

    https://en.wikipedia.org/wiki/Martin_Shkreli


    Yes. I wonder how much more of this the US is in for over the next 4 years. :-(

    Simon.

    --
    Simon Clubley, clubley@remove_me.eisner.decus.org-Earth.UFP
    Walking destinations on a map are further away than they appear.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Dave Froble@21:1/5 to Simon Clubley on Thu Dec 5 22:16:31 2024
    On 12/4/2024 8:20 AM, Simon Clubley wrote:
    On 2024-12-02, John Dallman <jgd@cix.co.uk> wrote:
    In article <vil9jg$3ives$3@dont-email.me>, ldo@nz.invalid (Lawrence
    D'Oliveiro) wrote:

    . . . a company which switched from VMware to an open-source
    alternative as a result of Broadcom's massive price hikes,
    and encountered an unexpected benefit: the resources consumed
    by system management overhead on the new product were so much
    less, they could run more VMs on the same hardware.

    That will be nice if it happens, but the pricing is a fully sufficient
    reason for moving. The way that some companies are seeing 1,000%, while
    others see 300% or 500% makes customers very suspicious that Broadcom are
    trying to jack up the price as much as each customer will take. If so,
    they aren't very good at that.

    My employer was given a special one-off offer of 500% and went "Hell,
    no!"


    Are you sure your employer's response was not a little more Anglo-Saxon
    in nature ? :-)

    On a more serious note, does anyone else think Broadcom are showing absolute contempt towards their users ? It reminds me of the person who took over supply of a vital medical drug in the US a few years ago and promptly increased the price massively because the users of the drug where a capture market that _needed_ to buy the drug.

    That action did not stand. Forget the actual result. Such activity is in need of feathers, rail, tar, and a rope.

    Thing is, he didn't do anything illegal.

    This is so blatant by Broadcom, I'm surprised the EU has not got more seriously involved.

    Simon.


    The key issue is whether Broadcom can at least recover their investment. Many will be pleased if they fail to do so.

    My bet is that the pricing might get some adjustments, should enough users refuse the high prices. They will find a price that sticks with enough users. The problem with that is that enough users will accept some compromise.

    Or maybe they need a large tax write-off ...

    --
    David Froble Tel: 724-529-0450
    Dave Froble Enterprises, Inc. E-Mail: davef@tsoft-inc.com
    DFE Ultralights, Inc.
    170 Grimplin Road
    Vanderbilt, PA 15486

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)