• Swap - No Root

    From Sunta Bivings <bivingssunta@gmail.com> to comp.lang.mumps on Thu Jan 18 08:35:58 2024
    From Newsgroup: comp.lang.mumps

    I'm new to Linux and I have dual-booted my Windows 10 laptop with Ubuntu. During the installation, I remember making two partitions, / (root) and /home, but when using Ubuntu I noticed there's just one drive there. I can't seem to understand this. What are these partitions for, and what should their sizes be, according to their use? Any help is appreciated. Thanks
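    Separate partitions still show up as a single tree in the file manager because Linux mounts them all into one hierarchy. A minimal way to see the real layout from a terminal (standard util-linux/coreutils tools; nothing distro-specific assumed):

```shell
# List block devices with their mount points; partitions such as / and /home
# appear as separate entries even though the desktop shows "one drive".
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT

# Show disk usage per mounted filesystem; a separate /home gets its own row.
df -h / /home
```

    If /home appears on the same line as / in the df output, the installer put both on one partition.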
    I'm using Ubuntu 22.04.2 LTS. Below is my current partition table configuration. Initially, I allocated 37GB for the root partition and left the remaining space for the home partition. However, I have realized that the root partition requires more space nowadays.
    One mistake I made was not creating an extended partition, which complicates the resizing process. My plan now is to shrink the home partition and use that space to extend the root partition. However, the swap partition is located in between, posing a challenge.
    P.S. The reason I used a separate home partition is to be able to reinstall the OS after a catastrophe while keeping the user data safe, but at this point I would really like to avoid that and just make space in the root partition.
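    One common way around a swap partition sitting between / and /home is to drop the swap, do the resize, and re-initialise swap afterwards. A rough outline, to be run from a live USB with nothing mounted; this is destructive, so back up first, and the partition numbers below (sda2 = root, sda3 = swap, sda4 = home) and the 200G size are assumptions, not your actual layout:

```shell
# Outline only - run from a live USB; partition numbers are placeholders.
sudo swapoff -a                      # stop using the swap partition
sudo e2fsck -f /dev/sda4             # check /home before shrinking it
sudo resize2fs /dev/sda4 200G        # shrink the /home filesystem (example size)
# In GParted: shrink partition 4 to match, move it (and the swap partition)
# to the right, then grow partition 2 (root) into the freed space. Afterwards:
sudo resize2fs /dev/sda2             # grow the root filesystem to fill the partition
sudo mkswap /dev/sda3                # re-initialise the moved swap partition
sudo swapon /dev/sda3
```

    A simpler alternative is to delete the swap partition entirely and use a swap file on the root filesystem instead, which recent Ubuntu releases support out of the box; that removes the "swap in the middle" problem for good.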
    I was wondering how to arrange the order of the root, home, and swap partitions - i.e., which goes on the left, just beside the Windows partition, which in the middle, and which on the far right. Is there any consideration regarding this arrangement?
    It doesn't matter where your partitions are. You can have root be on a primary or logical partition. You can have root, home, and boot all smushed together on one partition, or micromanage all your folders to different partitions (you might hit a built-in limit for logical partitions (59?), and you're only allowed four primary). I think you could even have all your partitions running on a networked file share.
    I've always partitioned /boot as my first logical partition, because I know I'm only going to give it 100 MB. Then I think about making a swap, and usually decide against it. Next I decide how much space I'll give up to root. Finally, I give home the rest of the space. If I'm not dual-booting Windows, root gets to be on a primary partition. Everything else gets logical partitions.
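    The scheme above (small /boot, root on a primary partition, everything else logical) can be sketched with parted. The disk name /dev/sdX and all sizes are illustrative only, and mklabel wipes the disk:

```shell
# Target layout (MBR): root primary, the rest logical inside an extended
# partition. WIPES /dev/sdX - a placeholder name, not a real suggestion.
#   /dev/sdX1  primary   ~30 GiB  ext4  /
#   /dev/sdX2  extended  rest of disk
#   /dev/sdX5  logical   ~100 MB  ext4  /boot
#   /dev/sdX6  logical   rest     ext4  /home
sudo parted -s /dev/sdX mklabel msdos
sudo parted -s /dev/sdX mkpart primary ext4 1MiB 30GiB
sudo parted -s /dev/sdX mkpart extended 30GiB 100%
sudo parted -s /dev/sdX mkpart logical ext4 30.1GiB 30.2GiB
sudo parted -s /dev/sdX mkpart logical ext4 30.2GiB 100%
```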
    I think there are sensible reasons to want to encrypt the root filesystem, and perhaps even more so the swap partition. For swap this is especially true: sensitive information like passwords, or anything that ever made its way into memory, can get written to disk for whatever reason. I am sure people will have different opinions on the severity of this issue and whether it is worth going through the trouble of encrypting your swap partition. Such discussions have been had many times over, and it is not the purpose of this post to start yet another. I would be the first to admit that most people probably do not need to worry about it. Choosing the appropriate security measure ultimately comes down to balancing convenience against security under a sensible threat model. Refer to the EFF's excellent guide for further considerations on how to approach this. I also have no shame in admitting that I am in no way an expert on any of these issues and will leave it, if for that reason alone, at what I have said so far.
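    For swap specifically there is a low-effort option: encrypt it with a throwaway random key generated at every boot, so nothing that was swapped out survives a power-off. A sketch of the standard dm-crypt/crypttab approach on Debian/Ubuntu; the partition path and mapper name are placeholders:

```shell
# /etc/crypttab - map the swap partition to an encrypted device keyed from
# /dev/urandom at each boot (the partition is re-formatted as swap every time,
# so prefer a stable device path such as /dev/disk/by-partuuid/... over sdaN):
#   cryptswap  /dev/sda3  /dev/urandom  swap,cipher=aes-xts-plain64,size=512
#
# /etc/fstab - point swap at the mapped device instead of the raw partition:
#   /dev/mapper/cryptswap  none  swap  sw  0  0
```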
    NOPE. You aren't running a health care system with PHI on it. Why does the OS need to be encrypted for a home NAS? /tmp is already tmpfs, which is gone just by shutting the system off. You can move swap to a different disk that is encrypted. If you were running "sensitive" apps in Docker and Docker's root were on a different disk, nothing would be in /var/log/. This is just too damn painful. Take it from someone who maintains plenty of healthcare systems with the OS encrypted.
    About the sharerootfs plugin: I was so far hoping that my changes are, in a sense, on a lower level than the one OMV and its plugins operate on. Or, to put it differently, I was hoping that OMV could be agnostic about whether its filesystem is encrypted, since the encryption is built into Debian natively. From what you say, it appears this assumption is mistaken.
    When Microsoft SharePoint is set up for an organization, a root (or top-level) site is created. Before April 2019, the site was created as a classic team site. Now, a communication site is set up as the root site for new organizations. If your environment was set up before April 2019, you can modernize your root site in three ways:
    Before you launch an intranet landing page at your root site location, we strongly encourage you to review the guidance about launching healthy portals.
    Some functionality is introduced gradually to organizations that have opted in to the Targeted release option in Microsoft 365. This means that you might not yet see some features described in this article, or they might look different.
    The root site for your organization is one of the sites that's provisioned automatically when you purchase and set up a Microsoft 365 or Office 365 plan that includes SharePoint. The URL of this site is typically contoso.sharepoint.com, the default name is "Communication site," and the owner is Company Administrator (all Global Administrators in the organization). The root site can't be connected to a Microsoft 365 group.
    After you replace the root site, content must be recrawled to update the search index. This might take some time depending on factors such as the amount of content in these sites. Anything dependent on the search index might return incomplete results until the sites have been recrawled.
    I found a simple tutorial (I haven't tried it yet), but I wonder if it is healthy to partition only / and swap. On the other hand, another tutorial simply doesn't work even though it has four partitions: /, /boot, swap, and /home.
    Nowadays using different filesystems for different root directories is more or less a matter of taste. It can be a safety plus: runaway daemons or applications filling /var cannot garbage the whole disk. In former times there were separate partitions for /, /usr, /var, /opt, /home, and so on. Making /boot a small standalone partition of, e.g., 512 MB is still not a bad idea, because the kernel's location is isolated and a corrupt / or /home will still let you boot into a rescue system.
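    A split like the one described translates into an fstab along these lines. The UUID values are placeholders to be filled in from `blkid`, and the mount options are just the common defaults:

```shell
# /etc/fstab - illustrative entries for a separate /boot and /home
# (replace the UUID=... placeholders with real values from `blkid`):
#   UUID=xxxx-root   /      ext4  defaults  0 1
#   UUID=xxxx-boot   /boot  ext4  defaults  0 2
#   UUID=xxxx-home   /home  ext4  defaults  0 2
#   UUID=xxxx-swap   none   swap  sw        0 0
```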
    I have an EC2 instance with an "instance store" device as its root device. Now I would like to attach an EBS volume to that same instance, only I want it to be the root device. Is that possible? What happens to the instance store device in that case?
    This can be done without creating a new AMI and without launching a new instance. When it's done, the original root volume stays attached on /dev/sda1 (or wherever it was originally attached; /dev/sda1 is the default for many AMIs). The original root volume will not be mounted in the filesystem - you'd need to do that yourself via the "mount" command.
    The technique requires the recent Ubuntu kernels, the ones that run in their 10.04 and 10.10 releases. Check out alestic.com for the most recent AMI IDs for these Ubuntu releases. These recent kernels are configured to boot from any attached device whose volume label is "uec-rootfs". If you are running one of these kernels all you need to do is to change the volume label of the current (instance-store) root volume to something else, change the volume label of the new root to uec-rootfs, and then reboot. If you're not running one of these kernels, you can't use this technique.
    First you would attach the EBS volume you want to act as the new root to one of the available devices, /dev/sdf through /dev/sdp. This can be done with direct EC2 API calls, with the EC2 command-line API tools (ec2-attach-volume), with a library such as boto, or via the AWS Management Console UI.
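    Put together, the attach-and-relabel flow looks roughly like this with the classic command-line tools the answer mentions. All IDs, device names, and labels other than uec-rootfs are made up for illustration:

```shell
# From your workstation (with EC2 API credentials): attach the EBS volume
# that will become the new root to a free device slot on the instance.
ec2-attach-volume vol-12345678 -i i-12345678 -d /dev/sdh

# On the instance (as root): give the old root some other label, and give
# the new root the "uec-rootfs" label these Ubuntu kernels boot from.
sudo e2label /dev/sda1 old-rootfs
sudo e2label /dev/sdh uec-rootfs
sudo reboot
```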
    To test this you should create an EBS volume that you know will boot properly. I like to do that by snapshotting the root volume of one of the EBS-backed AMIs from the above-mentioned Ubuntu releases. From that snapshot you can create a new, bootable EBS volume in any Availability Zone. Make sure you can tell the difference between the running instance's original root volume and the new EBS root volume - before you run the reroot procedure above you can put a "marker" file on the old root volume:
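    The marker-file check can be as simple as dropping a file on the old root before rebooting and looking for it afterwards. A runnable sketch, using a scratch directory as a stand-in for the real root mount point:

```shell
# Stand-in for the old root's mount point (on the real system this is /).
OLD_ROOT=$(mktemp -d)

# Before the reboot: leave a marker on the volume you are about to swap out.
echo "old-root $(date)" > "$OLD_ROOT/i-am-the-old-root"

# After the reboot: if the marker is visible at /, you are still on the old
# root; if it is absent, the relabeled EBS volume booted instead.
if [ -f "$OLD_ROOT/i-am-the-old-root" ]; then
  echo "still on old root"
else
  echo "booted from new root"
fi
```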
    ec2-register --snapshot snap-9eb4ecf6 --architecture i386 --name "Zenoss Enterprise 3.0 beta 2 on CentOS" --description "This is from an install of Zenoss Core beta 1 and Zenoss Enterprise beta 2, both of version 3.0 (or internally 2.5.70 217). An EBS block device was attached and the filesystem rsynced over; then the EBS volume was snapshotted, and this is based off that." --root-device-name /dev/sda1 --kernel aki-9b00e5f2
    Hi, first I want to say I'm new to Linux anyway.
    I'm coming from default Arch, and on that install I had a 1 GB FAT32 partition, a 96 GiB swap, a 35 GiB root, and a 1.7 TiB home. But when I set that up on EndeavourOS and restarted my PC to find the boot manager, it just wasn't there; the USB was, so I booted back into the USB. Once I was on the live user for the USB I looked in the file explorer and found that the partitions were created and had data in them, but I just can't boot to the boot partition. Right now I've just installed by wiping my whole disk, but I don't want to have my root and home together, and I want a much larger swap, so I want to reinstall with the setup I described at the start. But as I said, when I try to do that it just doesn't boot.
    The way to grow the root partition on your File Fabric depends on what version of the File Fabric was originally installed. To determine whether these instructions apply to your File Fabric, please consult this page carefully.
    In our example above the linux-swap partition is partition 3 and is the last partition on this disk. If your linux-swap partition is not the last partition or not listed, STOP at this point and contact support. You are most likely not following the proper set of instructions.
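    Before touching anything, you can confirm where swap actually lives with read-only commands; neither of these modifies the disk:

```shell
# Show active swap devices (the kernel's view; readable without root).
cat /proc/swaps

# Show all partitions with sizes so you can confirm the swap partition is
# the last one on the disk (lsblk is read-only and needs no root either).
lsblk -o NAME,SIZE,TYPE,FSTYPE
```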
    The original root volume is detached from the instance, and the new root volume is attached in its place. The instance's block device mapping is updated to reflect the ID of the replacement root volume. You can choose whether or not to keep the original root volume after the replacement process has completed. If you choose to delete it, the original root volume is automatically deleted and becomes unrecoverable. If you choose to keep it, the volume remains provisioned in your account; you must manually delete it when you no longer need it.
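    The flow described above corresponds to the EC2 root-volume-replacement task; with the modern AWS CLI it looks roughly like this. The instance and snapshot IDs are placeholders, and you should check `aws ec2 create-replace-root-volume-task help` for the exact options available in your CLI version:

```shell
# Replace the root volume from a snapshot and delete the old volume when done.
aws ec2 create-replace-root-volume-task \
    --instance-id i-0123456789abcdef0 \
    --snapshot-id snap-0123456789abcdef0 \
    --delete-replaced-root-volume

# Or omit --delete-replaced-root-volume to keep the original volume (the
# default) and clean it up manually later with:
#   aws ec2 delete-volume --volume-id vol-...
```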
    --- Synchronet 3.21d-Linux NewsLink 1.2