JuiceFS 1.1 Beta 2: Simplifying Large-Scale Cluster Management with Gluster
JuiceFS 1.1 Beta 2 has been released, featuring significant enhancements. Notably, this version introduces support for Gluster, an open-source distributed storage system, as an object storage option for JuiceFS. This streamlines the management of large-scale clusters and provides users with a smoother, more optimized experience.
In this post, we’ll explore two key features of JuiceFS 1.1 Beta 2, as well as the bug fixes included in this release. For the full list of our improvements, you can check out our release notes.
Feature 1: The newly added support for Gluster
Gluster, or GlusterFS, is an open-source software-defined distributed storage solution that can handle petabyte-scale data within a single cluster. It was initially released in 2005 and is maintained primarily by Red Hat, with a substantial global user base.
Why we added support for Gluster in this release
To address the difficulties with scaling large clusters and simplify operations and management, we’ve introduced support for Gluster in JuiceFS 1.1 Beta 2.
JuiceFS doesn’t store data directly; it relies on integration with other object storage systems for underlying data management. Users who need to set up their own object storage have commonly chosen MinIO or Ceph. However, both options have their limitations. For example:
- MinIO may face challenges when scaling large clusters.
- Ceph often involves high operational complexities in handling cluster failures.
Why Gluster+JuiceFS makes large-scale cluster management easier
Gluster+JuiceFS enables easier management of large-scale clusters due to the advantages of Gluster itself and the ability of JuiceFS to compensate for Gluster’s shortcomings.
Gluster’s advantages and disadvantages
- Gluster has a simple and fully decentralized architecture. It uses a local file system, such as XFS, on each node for data storage.
- Thanks to its simple architecture, Gluster can easily support clusters at the petabyte scale.
- Gluster is known for being easy to use and manage.
However, Gluster has a few disadvantages:
- Slower directory listing for large directories: Without a centralized metadata management service, Gluster clients need to combine results from multiple service nodes when listing directories. This operation can be time-consuming for large directories.
- Weak replica consistency: Data in Gluster typically requires redundancy, such as maintaining three replicas. In corner cases, inconsistencies may arise among the replicas after multiple file modifications.
- Performance impact of file renaming: Gluster uses the hash result of the file name to determine the storage node. When a file is renamed, its location may change. Gluster creates a link at the expected location, pointing to the actual location of the file. This process can affect subsequent file retrieval performance to some extent.
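The rename cost described above can be illustrated with a toy model. The following Python sketch is not Gluster's actual distributed hash translator (DHT); the node names and hashing scheme are invented for illustration. It shows why renaming a file to a name that hashes to a different node leaves a link behind, adding an extra hop to subsequent lookups:

```python
# Toy model of hash-based file placement, loosely inspired by Gluster's
# DHT. Node names and the hash scheme are illustrative assumptions.
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def placement(name: str) -> str:
    """Pick a storage node from the hash of the file name alone."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

class Cluster:
    def __init__(self):
        # node -> {filename: payload bytes, or ("link", real_node)}
        self.nodes = {n: {} for n in NODES}

    def create(self, name: str, data: bytes) -> None:
        self.nodes[placement(name)][name] = data

    def rename(self, old: str, new: str) -> None:
        real_node = placement(old)   # where the data actually lives
        expected = placement(new)    # where lookups for `new` will go
        data = self.nodes[real_node].pop(old)
        self.nodes[real_node][new] = data
        if expected != real_node:
            # Leave a link at the expected node pointing to the real
            # one; lookups for `new` now need an extra hop.
            self.nodes[expected][new] = ("link", real_node)

    def lookup(self, name: str):
        node = placement(name)
        entry = self.nodes[node].get(name)
        if isinstance(entry, tuple) and entry[0] == "link":
            node = entry[1]          # follow the link: one extra hop
            entry = self.nodes[node][name]
        return node, entry
```

In this model the data never moves on rename; only a pointer is added, which is the source of the lookup overhead mentioned above.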
JuiceFS overcomes Gluster’s limitations
When you use Gluster only as the data storage engine for JuiceFS, the aforementioned disadvantages do not come into play, because:
- In daily usage, JuiceFS accesses the storage through simple object storage interfaces, such as basic GET, PUT, and DELETE operations.
- JuiceFS only writes data objects once, without overwriting or appending.
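As a sketch of how this pairing is set up (the host names, volume name, and metadata URL below are illustrative placeholders), a JuiceFS file system can be formatted with a Gluster volume as its object storage; note that Gluster support may require a JuiceFS client built with the Gluster build tag:

```shell
# Example only: "node1,node2,node3" are Gluster hosts and "gv0" is the
# Gluster volume name; replace the Redis URL with your metadata engine.
juicefs format \
    --storage gluster \
    --bucket node1,node2,node3/gv0 \
    redis://127.0.0.1:6379/1 \
    myjfs
```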
Feature 2: Automatically blocking older versions from connecting
To prevent unintentional mixing of multiple client versions, JuiceFS automatically applies restrictions when creating new file systems, disallowing connections from clients older than v1.1.
In JuiceFS 1.1 Beta, we introduced directory quota management and usage statistics. These are enabled by default for newly created file systems. However, clients earlier than v1.1 lack these features: they are not subject to directory quota limits and may cause significant discrepancies in usage statistics.
For existing file systems, directory quota and usage statistics are not enabled by default. Thus, this restriction does not apply. However, if you want to use directory quotas, we recommend manually adding version restrictions before setting quotas, as follows:
```shell
$ juicefs config $META-URL --min-client-version 1.1.0-A
```
Bug fixes
In this release, we fixed several bugs. For example:
- Fixed an issue where certain commands, such as `juicefs rmr`, might not function correctly in container environments.
- Fixed an issue where the `juicefs info` command could fail when viewing very large files.
- Fixed a problem where files could still be successfully created in deleted directories under specific scenarios.
- Fixed an issue where the `juicefs stats` command occasionally failed to display object storage metrics.
For a complete list of bug fixes, please visit the JuiceFS 1.1 Beta 2 release page on GitHub.
Give it a try
If you have any questions or would like to share your thoughts, feel free to join our discussions on GitHub and community on Slack. We greatly appreciate the invaluable help, feedback, and support we receive from our users and community.