From Newsgroup: rec.music.classical
Note: If the volume does not show "Started", the log file /var/log/glusterfs/glusterd.log should be checked in order to debug and diagnose the situation. These logs can be looked at on one, or all, of the configured servers.

From server 1: I run sudo gluster peer probe server2 and it is added to the cluster. There were no questions asked. I did nothing to tell server2 to allow server1 to add it to the cluster. That does not make sense to me.

When you are creating a new cluster, you start on one server and add the others using gluster peer probe OTHER_SERVER. Additional security isn't strictly required, because you are adding new, uninitialized glusterfs servers. (Unless you leave a freshly installed, uninitialized gluster running with public access - then you are in trouble.)

Fortunately, GlusterFS has added SSL support, which is sadly almost completely undocumented. Presumably using SSL will make things better, although since it's undocumented it's hard to say for certain. What documentation exists is in this blog. Sadly, it only gives a sequence of steps.

Luckily this is not the case. Any of the commands that modify volumes return an exit status of 1 and fail with EPOLLERR, as indicated by /var/log/glusterfs/cli.log. It appears you can only get info about the volumes to which that client has access.

Something I have been blissfully ignorant about is the Kubernetes GlusterFS support, which seems abandoned. There was gluster/gluster-kubernetes, with the goal of easily creating gluster clusters, and heketi, providing a RESTful API for that.

The other alternative I have considered is to mount a shared filesystem implemented with GlusterFS. GlusterFS is a bit easier to set up and manage than Ceph. A shared filesystem makes it easier to store the static files, which can then be accessed like any other local files on the server (while still providing high availability if the glusterfs client is used to access the filesystem).

I want to use GlusterFS as distributed file storage on FreeBSD 11.1. Documentation is poor, so I followed some howtos on the net. I could create the glusterfs volume, but I have trouble mounting it on another client machine. Here is what I did so far:

As I normally do before I ask "silly" questions, I googled a lot for this. I played around with also installing glusterfs on the client (pkg install glusterfs), enabling it in the client's /etc/rc.conf, and adding stuff for FUSE, but I could not get it to work. I feel quite annoyed, because I know it must be a very small thing I'm missing here!?

I have 3 hosts: 1 server with glusterfs 3.3.1 and 2 clients with glusterfs 6.10, and it is impossible for me to bring up the mount on the client hosts. I rule out all other kinds of problems, since the server is operating and there are already other clients on the network with glusterfs 3.3.1 that mount normally.

What I'd like to do is have the data on that glusterfs mount be synced to Google Drive using rclone, such that the worker containers have access to data that is in Google Drive; when they make changes, those changes are synced to Google Drive, and in reverse, changes made from other systems also sync to Google Drive and are seen by the workers as well.

Josh_Harrison jprante Hello folks, I am trying to set up "ES data on GlusterFS" but I am facing issues such as "CorruptIndexException" (due to which the ES index health turned RED), cache consistency problems, and OpenShift Logging Elasticsearch FS locks when using the GlusterFS storage backend. All of these issues occur with glusterfs 3.10.1. Any pointers or forwards will be appreciated.

This flaw is based on a symlink (symbolic link) attack. Any glusterfs client with access to a gluster node can mount the gluster_shared_storage volume without authentication. This volume contains a file which is the target of a symlink from /etc/cron.d, and can be used to configure cron jobs for arbitrary users. After mounting gluster_shared_storage, the client can overwrite this file to schedule a cron job which would run as root. This leads to privilege escalation on the gluster server.

The dangerous symlink is created when gluster snapshot scheduling is enabled. This requires a gluster administrator to run snap_scheduler.py init, using the script included in the glusterfs-server package. Note that if snapshot scheduling is later disabled, the symlink is not automatically removed, so your system may remain vulnerable.

The vfs_glusterfs VFS module provides an alternative, and superior, way to access a Gluster filesystem from Samba for sharing. It does not require a Gluster FUSE mount but directly accesses the GlusterFS daemon through its library libgfapi, thereby omitting the expensive kernel-userspace context switches and taking advantage of some of the more advanced features of GlusterFS.

Note that since vfs_glusterfs does not require a Gluster mount, the share path is treated differently than for other shares: it is interpreted as the base path of the share relative to the gluster volume used. Because this is usually not a system path at the same time, in a ctdb cluster setup where ctdb manages Samba you need to set CTDB_SAMBA_SKIP_SHARE_CHECK=yes in ctdb's configuration file. Otherwise ctdb will not become healthy.

The write-behind translator is enabled by default on GlusterFS. The vfs_glusterfs plugin will check for the presence of the translator and refuse to connect if it is detected. Please disable the write-behind translator for the GlusterFS volume to allow the plugin to connect to the volume. The write-behind translator can easily be disabled via calling
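The actual disable command was cut off above. Write-behind is a standard volume option, so it can be turned off with gluster volume set; the volume name gv0 below is a placeholder, substitute your own:

```shell
# Disable the write-behind translator on the volume (volume name "gv0"
# is a placeholder -- use your own volume name).
gluster volume set gv0 performance.write-behind off

# Confirm the option took effect.
gluster volume get gv0 performance.write-behind
```

After changing the option, reconnect the Samba share so vfs_glusterfs re-checks the volume graph.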
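For reference, the peer-probe and volume bring-up flow discussed earlier in the thread looks roughly like this; the hostnames server1/server2, the volume name gv0, and the brick paths are all placeholders:

```shell
# From server1: add server2 to the trusted storage pool.
gluster peer probe server2
gluster peer status

# Create and start a 2-way replicated volume across both servers
# (brick paths are placeholders).
gluster volume create gv0 replica 2 \
    server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0

# Confirm the volume shows "Status: Started"; if it does not, check
# /var/log/glusterfs/glusterd.log on each server.
gluster volume info gv0
```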
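For the client-side mount problems described above, a minimal sketch of the Linux FUSE-client mount may help for comparison (FreeBSD installs its own mount helper with the glusterfs package, so the exact invocation there can differ); server1, gv0, and the mount point are placeholders:

```shell
# Mount the volume via the FUSE client. Any server in the pool can
# serve the volume file.
mount -t glusterfs server1:/gv0 /mnt/gluster

# Equivalent direct invocation of the FUSE client, useful when the
# mount helper is not available.
glusterfs --volfile-server=server1 --volfile-id=gv0 /mnt/gluster

# If the mount fails, the client-side log (named after the mount
# point, with slashes replaced by dashes) is the place to look.
tail /var/log/glusterfs/mnt-gluster.log
```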
--- Synchronet 3.21a-Linux NewsLink 1.2