Diffstat (limited to 'thirdparty/zstd/README.md')
 thirdparty/zstd/README.md | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/thirdparty/zstd/README.md b/thirdparty/zstd/README.md
index 7caee5fd3f..f37be4542c 100644
--- a/thirdparty/zstd/README.md
+++ b/thirdparty/zstd/README.md
@@ -6,10 +6,17 @@ and a command line utility producing and decoding `.zst` and `.gz` files.
For other programming languages,
you can consult a list of known ports on [Zstandard homepage](http://www.zstd.net/#other-languages).
-|Branch |Status |
-|------------|---------|
-|master | [![Build Status](https://travis-ci.org/facebook/zstd.svg?branch=master)](https://travis-ci.org/facebook/zstd) |
-|dev | [![Build Status](https://travis-ci.org/facebook/zstd.svg?branch=dev)](https://travis-ci.org/facebook/zstd) |
+| dev branch status |
+|-------------------|
+| [![Build Status][travisDevBadge]][travisLink] [![Build status][AppveyorDevBadge]][AppveyorLink] [![Build status][CircleDevBadge]][CircleLink] |
+
+[travisDevBadge]: https://travis-ci.org/facebook/zstd.svg?branch=dev "Continuous Integration test suite"
+[travisLink]: https://travis-ci.org/facebook/zstd
+[AppveyorDevBadge]: https://ci.appveyor.com/api/projects/status/xt38wbdxjk5mrbem/branch/dev?svg=true "Windows test suite"
+[AppveyorLink]: https://ci.appveyor.com/project/YannCollet/zstd-p0yf0
+[CircleDevBadge]: https://circleci.com/gh/facebook/zstd/tree/dev.svg?style=shield "Short test suite"
+[CircleLink]: https://circleci.com/gh/facebook/zstd
+
As a reference, several fast compression algorithms were tested and compared
on a server running Linux Debian (`Linux version 4.8.0-1-amd64`),
@@ -60,11 +67,11 @@ Previous charts provide results applicable to typical file and stream scenarios
The smaller the amount of data to compress, the more difficult it is to compress. This problem is common to all compression algorithms, and the reason is that compression algorithms learn from past data how to compress future data. But at the beginning of a new data set, there is no "past" to build upon.
To solve this situation, Zstd offers a __training mode__, which can be used to tune the algorithm for a selected type of data.
-Training Zstandard is achieved by provide it with a few samples (one file per sample). The result of this training is stored in a file called "dictionary", which must be loaded before compression and decompression.
+Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called "dictionary", which must be loaded before compression and decompression.
Using this dictionary, the compression ratio achievable on small data improves dramatically.
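As a quick sketch of the workflow (the directory and file names below are placeholders, not part of any sample set), a dictionary is trained once over many small samples and then passed to both the compressor and the decompressor:

```sh
# Train a dictionary from a set of small samples (one file per sample).
zstd --train training-samples/* -o users.dict

# Compress a small file with the dictionary...
zstd -D users.dict user1234.json

# ...and decompress it with the same dictionary.
zstd -D users.dict --decompress user1234.json.zst
```

The same dictionary must be provided for both compression and decompression.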
The following example uses the `github-users` [sample set](https://github.com/facebook/zstd/releases/tag/v1.1.3), created from [github public API](https://developer.github.com/v3/users/#get-all-users).
-It consists of roughly 10K records weighting about 1KB each.
+It consists of roughly 10K records weighing about 1KB each.
Compression Ratio | Compression Speed | Decompression Speed
------------------|-------------------|--------------------