-------------------------------------------------------------------
Mon May 5 22:40:02 UTC 2014 - jengelh@inai.de

- Update to new upstream release 3.5.0
  * AFR_CLI_enhancements: Improved logging with more clarity and
    statistical information. It allows visibility into why a
    self-heal process was initiated and which files are affected,
    for example. Prior to this enhancement, clearly identifying
    split-brain issues from the logs was often difficult, and there
    was no facility to automatically identify which files were
    affected by a split-brain issue. Remediating split brain without
    quorum still requires some manual effort, but the tools provided
    make this much simpler.
  * Exposing Volume Capabilities: Provides client-side insight into
    whether a volume is using the BD translator and, if so, which
    capabilities are being utilized.
  * File Snapshot: Provides a mechanism for snapshotting individual
    files. The most prevalent use case is snapshotting running VMs,
    allowing for point-in-time capture. This also allows reverting
    VMs to a previous state directly from Gluster, without needing
    external tools.
  * GFID Access: A new method for accessing data directly by GFID.
    This allows data to be consumed directly by the changelog
    translator, which logs GFIDs internally, very efficiently.
  * On-Wire Compression + Decompression: Reduces the overall
    network overhead for Gluster operations from a client.
  * Prevent NFS restart on Volume change (Part 1): Previously, any
    volume change (volume option, volume start, volume stop, volume
    delete, brick add, etc.) would restart the NFS server, which
    led to service disruptions. This feature allows modifying
    certain NFS-based volume options without such interruptions.
    Part 1 covers anything not requiring a graph change.
  * Quota Scalability: Massively increases the number of quota
    configurations from a few hundred to 65536 per volume (see the
    CLI sketch below this list).
  * readdir_ahead: Gluster now provides read-ahead support for
    directories to improve sequential directory read performance.
  * zerofill: Enhancement to allow zeroing out of VM disk images,
    which is useful for first-time provisioning or for overwriting
    an existing disk.
  * Brick Failure Detection: Detecting failures on the filesystem
    that a brick uses makes it possible to handle errors caused
    outside of the Gluster environment.
  * Disk encryption: Integrates the earlier HekaFS work into
    Gluster. This allows a volume (or a per-tenant part of a
    volume) to be encrypted "at rest" on the server using keys that
    are only available on the client. [Note: Only the content of
    regular files is encrypted; file names are not. Encryption also
    does not work over NFS mounts.]
  * Geo-Replication Enhancement: Previously, the geo-replication
    process, gsyncd, was a single point of failure, as it only ran
    on one node in the cluster; if that node failed, the entire
    geo-replication process was offline until the issue was
    addressed. This release goes further by foregoing the use of
    xattrs to identify change candidates and instead consuming the
    volume changelog directly, which improves performance in two
    ways: only a running list of files that may need to be synced
    is kept, and the changelog is maintained in memory, giving the
    gsync daemon near-instant access to which data needs to be
    synced and where.
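  A minimal sketch of setting a quota with the gluster CLI; the
  volume name "myvol", the directory "/projects" and the 10GB limit
  are placeholders, not taken from this changelog:
    gluster volume quota myvol enable
    gluster volume quota myvol limit-usage /projects 10GB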

-------------------------------------------------------------------
Thu Feb 28 21:58:02 UTC 2013 - jengelh@inai.de

- Update to new upstream release 3.4.0alpha (rpm: 3.4.0~qa9)
  * automake-1.13 support
- Enable AIO support

-------------------------------------------------------------------
Tue Nov 27 11:28:36 UTC 2012 - jengelh@inai.de

- Use `glusterd -N` in glusterd.service to run the daemon in the
  foreground, as required (see the unit sketch below)
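  A minimal sketch of the relevant unit section; the binary path
  /usr/sbin/glusterd is an assumption, not taken from this package:
    [Service]
    Type=simple
    ExecStart=/usr/sbin/glusterd -N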

-------------------------------------------------------------------
Tue Nov 27 10:59:15 UTC 2012 - cfarrell@suse.com

- license update: GPL-2.0 or LGPL-3.0+

-------------------------------------------------------------------
Fri Nov 9 21:47:11 UTC 2012 - jengelh@inai.de

- Update to new upstream release 3.4.0qa2
  * No changelog provided by upstream
- Remove glusterfs-init.diff, merged upstream
- Provide systemd service file

-------------------------------------------------------------------
Wed Oct 31 12:19:47 UTC 2012 - jengelh@inai.de

- Update to new upstream release 3.3.1
  * mount.glusterfs: Add support for {attribute,entry}-timeout
    options (see the mount sketch below this list)
  * cli: Proper XML output for "gluster peer status"
  * self-heald: Fix inode leak
  * storage/posix: Implement native Linux AIO support
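  A minimal sketch of passing the new timeout options at mount
  time; the server, volume, mount point and 600-second values are
  placeholders:
    mount -t glusterfs \
      -o attribute-timeout=600,entry-timeout=600 \
      server1:/myvol /mnt/glusterfs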

-------------------------------------------------------------------
Mon Sep 24 03:45:09 UTC 2012 - jengelh@inai.de

- Update to new upstream release 3.3.0
  * New: Unified File & Object access
  * New: Hadoop hooks - HDFS compatibility layer
  * New volume type: Repstr - replicated + striped (+ distributed)
    volumes

-------------------------------------------------------------------
Fri Dec 2 15:43:43 UTC 2011 - coolo@suse.com

- Add automake as a BuildRequires to avoid an implicit dependency

-------------------------------------------------------------------
Wed Oct 5 22:17:35 UTC 2011 - jengelh@medozas.de

- Initial package for build.opensuse.org