Merge pull request #9 from 1ace/patch-1

cmake: add installation target
diff --git a/README.md b/README.md
index c9861d4..ad03894 100644
--- a/README.md
+++ b/README.md
@@ -23,6 +23,23 @@
 
 The command line tool used to create, validate, and transcode/unpack .basis files is named "basisu". Run basisu without any parameters for help. Note this tool uses the reference encoder.
 
+To build basisu:
+
+```
+cmake CMakeLists.txt
+make
+```
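+
+Optionally, the built files can then be installed (this change adds the CMake install target). A minimal sketch; the default prefix is typically /usr/local (which may need sudo), and the custom prefix shown is just an example:
+
+```
+make install
+
+# or pick a different install prefix at configure time
+cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local CMakeLists.txt
+make
+make install
+```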
+
+On macOS, it's currently necessary to build with gcc instead of the system default clang:
+
+```
+# Only if you don't have gcc installed
+brew install gcc
+
+cmake -DCMAKE_CXX_COMPILER=g++-9 CMakeLists.txt
+make
+```
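+
+The g++-9 above assumes Homebrew installed gcc 9. If brew pulled in a different major version, point CMake at that versioned binary instead, for example:
+
+```
+cmake -DCMAKE_CXX_COMPILER=g++-10 CMakeLists.txt
+make
+```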
+
 To compress a sRGB image to .basis:
 
 `basisu x.png`
@@ -31,7 +48,7 @@
 
-To add automatically generated mipmaps to the .basis file, at a higher than default quality level (which ranges from [1,255]):
+To add automatically generated mipmaps to the .basis file, at a higher-than-default quality level (the quality range is [1,255]):
 
-`basisu -mipmap -q 190 x.png`
+`basisu -mipmap -mip_srgb -q 190 x.png`
 
 There are several mipmap options that allow you to change the filter kernel, the smallest mipmap dimension, etc. The tool also supports generating cubemap files, 2D/cubemap texture arrays, etc.
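+
+For example, a mipmapped cubemap could be generated from six face images with something like the following (the face file names are placeholders, and the exact option names should be confirmed against basisu's built-in help):
+
+`basisu -tex_type cubemap -mipmap -mip_srgb posx.png negx.png posy.png negy.png posz.png negz.png`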
 
@@ -73,6 +90,39 @@
 
 If you are doing rate distortion comparisons vs. other similar systems, be sure to experiment with increasing the endpoint RDO threshold (-endpoint_rdo_thresh X). This setting controls how aggressively the compressor's backend will combine together nearby blocks so they use the same block endpoint codebook vectors, for better coding efficiency. X defaults to a modest 1.5, which means the backend is allowed to increase the overall color distance by 1.5x while searching for merge candidates. The higher this setting, the better the compression, with the tradeoff of more block artifacts. Settings up to ~2.25 can work well, and make the codec more competitive.
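+
+For example, to allow more aggressive endpoint merging than the default of 1.5:
+
+`basisu -endpoint_rdo_thresh 2.25 x.png`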
 
+### More detailed examples
+
+`basisu x.png`\
+Compress sRGB image x.png to x.basis using default settings (multiple filenames OK)
+
+`basisu x.basis`\
+Unpack x.basis to PNG/KTX files (multiple filenames OK)
+
+`basisu -file x.png -mipmap -mip_srgb -y_flip`\
+Compress an sRGB image named x.png to a mipmapped x.basis file, Y flipping each source image
+
+`basisu -validate -file x.basis`\
+Validate x.basis (check header, check file CRCs, attempt to transcode all slices)
+
+`basisu -unpack -file x.basis`\
+Validate, transcode, and unpack x.basis to mipmapped .KTX and RGB/A .PNG files (transcodes to all supported GPU texture formats)
+
+`basisu -q 255 -file x.png -mipmap -mip_srgb -debug -stats`\
+Compress sRGB x.png to x.basis at quality level 255 with compressor debug output/statistics
+
+`basisu -linear -max_endpoints 16128 -max_selectors 16128 -file x.png`\
+Compress non-sRGB x.png to x.basis using the largest supported manually specified codebook sizes
+
+`basisu -linear -global_sel_pal -no_hybrid_sel_cb -file x.png`\
+Compress a non-sRGB image, use virtual selector codebooks for improved compression (but slower encoding)
+
+`basisu -linear -global_sel_pal -file x.png`\
+Compress a non-sRGB image, use hybrid selector codebooks for slightly improved compression (but slower encoding)
+
+`basisu -tex_type video -framerate 20 -multifile_printf "x%02u.png" -multifile_first 1 -multifile_count 20`\
+Compress a video sequence of 20 sRGB source images (x01.png, x02.png, x03.png, etc.) to x01.basis
+
+
 ### WebGL test 
 
 The "WebGL" directory contains two very simple WebGL demos that use the transcoder compiled to wasm with [emscripten](https://emscripten.org/). See more details [here](webgl/README.md).
@@ -188,7 +238,7 @@
 
 BC5/3DC: This format has two BC4 blocks, and is usually used for XY (red/green) tangent space normal maps. The first block (output red/or X) will have the R channel of the color slice (which we assume is actually grayscale, but doesn't have to be), and the output green channel (or Y) will have all-255 blocks. We could support converting the first two color components of the color ETC1S texture slice to BC5, but doing so doesn't seem to have any practical benefits (just use BC1 or BC7). Alternately we could support allowing the user to select a source channel other than red.
 
-Note that you can directly control exactly how transcoding works at the block level by calling a lower level API, basisu_transcoder::transcode_slice(). The higher level API (transcode_image_level) uses this low-level API internally. find_slice() and get_file_info() return all the slice information you would need to call this lower level API. I would study transcode_image_level()'s implementation before using the slice API to get familar with it. The slice API was written first.
+Note that you can directly control exactly how transcoding works at the block level by calling a lower level API, basisu_transcoder::transcode_slice(). The higher level API (transcode_image_level) uses this low-level API internally. find_slice() and get_file_info() return all the slice information you would need to call this lower level API. I would study transcode_image_level()'s implementation before using the slice API to get familiar with it. The slice API was written first.
 
-2. To get uncompressed 16/24/32-bpp pixel data from a slice, the best format to transcode to is ETC1. Then unpack and convert the resulting ETC1S block data to pixels (each block will be 4x4 pixels). Internally, everything is actually just ETC1S in the baseline format. We will be adding new methods that support decompressing to a few uncompressed pixel formats as some point.
+2. To get uncompressed 16/24/32-bpp pixel data from a slice, the best format to transcode to is ETC1. Then unpack and convert the resulting ETC1S block data to pixels (each block will be 4x4 pixels). Internally, everything is actually just ETC1S in the baseline format. We will be adding new methods that support decompressing to a few uncompressed pixel formats at some point.