We do not need the test "contentLength < COMPRESSOR_BUFFER_SIZE" to know
whether to compress the content or not.
This makes no sense: we WANT to compress the content when it is big.
The compr pointer points to the allocated memory. We must not modify
its value.
If we advance the pointer by two bytes each time we compress an answer,
we will end up writing into random memory and segfault.
Now, we use a std::vector to correctly handle allocation
(and deallocation!) of the memory.
For the IE browser, we need to remove the first two bytes.
If we make our buffer start two bytes later, we also need to reduce the
size of the buffer by two bytes. Otherwise we will read and send two
extra junk bytes.
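
A minimal sketch of the resulting approach, assuming a zlib-style
compression step whose first two output bytes are the header that old IE
versions cannot handle (the function name and the exact API used by
kiwix-serve are illustrative assumptions):

    #include <string>
    #include <vector>
    #include <zlib.h>

    // Hypothetical helper: compress 'content' and strip the zlib header.
    // The std::vector owns the buffer, so allocation and deallocation are
    // handled automatically and no raw pointer is ever advanced.
    bool compress_content(std::string& content)
    {
        uLongf compSize = compressBound(content.size());
        std::vector<Bytef> compr(compSize);
        if (compress(compr.data(), &compSize,
                     reinterpret_cast<const Bytef*>(content.data()),
                     content.size()) != Z_OK)
            return false;
        // Skip the two zlib header bytes (old IE chokes on them) and shrink
        // the announced size accordingly, so no junk bytes are sent.
        content.assign(reinterpret_cast<const char*>(compr.data()) + 2,
                       compSize - 2);
        return true;
    }

Since the output buffer is sized with compressBound(), there is no upper
limit on the content length that can be compressed.
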
Fix #15
Ideally we should check if iconv is present to know if ctpp2 has been
built with iconv.
This may be a bit too complex for our present case: as we know our
cross-compilation environment, it is better to remove the use of iconv
everywhere for now.
If we are compiling static binaries, all dependencies (including indirect
dependencies) must be present on the command line.
To get them, we have to add the '--static' option to the pkg-config call.
Meson does this for us, but we must ask it to with the 'static' argument.
Those options were previously added by the kiwix-build.py script.
But it's better to add them in the meson.build script so that people
not using kiwix-build can also compile static binaries.
ctpp2-st is not a standard name; all other projects use the same base name
for dynamic and static libs.
Debian already patches the lib name in the ctpp2 package.
As we also patch it in kiwix-build, we can ignore ctpp2-st and always
try to link against ctpp2.
This is even necessary, as ctpp2-st does not exist at all in those
use cases, so we cannot try to link with ctpp2-st when compiling statically.
We could fix the ctpp2 lib search to look for both ctpp2 and ctpp2-st when
compiling statically, but it seems to be a lot of work for nothing
as ctpp2-st is not used at all in our use cases.
We need to support, as far as possible, the meson version installed on
Ubuntu 16.04 (LTS).
In meson 0.31.0, find_library was moved to a method of the compiler object.
If we are cross-compiling to Windows, we also need to link with the
iconv library.
We do not check for the iconv library's existence here. We assume that
if the ctpp2 library is present, all its own dependencies are present too.
'max' is a size_t and 'blob()->size()-pos' is a uint64_t.
Depending on the compiler (version, options, ...), this is an error, as
the compiler cannot deduce which template specialization to use.
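
A minimal sketch of the kind of fix, assuming the ambiguous call is a
std::min between those two values (the surrounding function is purely
illustrative): forcing the template argument makes both operands the same
type.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Hypothetical illustration: 'max' comes from the content-reader
    // callback, 'blobSize' and 'pos' describe the blob being served.
    size_t bytes_to_copy(size_t max, uint64_t blobSize, uint64_t pos)
    {
        // std::min(max, blobSize - pos) cannot deduce its template argument
        // when size_t and uint64_t are distinct types. Forcing uint64_t
        // keeps the comparison exact; the result is <= max, so the final
        // cast back to size_t is safe.
        return static_cast<size_t>(std::min<uint64_t>(max, blobSize - pos));
    }
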
- On Unix, filenames are case sensitive and all include files are lowercase.
- When cross-compiling to Windows, we use mingw32 and not MSC,
so we should not try to include "stdint4win.h".
- The Windows headers #define interface to struct.
As we use interface as a variable name, we need to #undef interface.
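
A minimal sketch of that last point, assuming the macro comes in through
windows.h (the declaration below is purely illustrative):

    #ifdef _WIN32
    #include <windows.h>  // the Windows headers do: #define interface struct
    #undef interface      // so 'interface' can be used as a plain identifier
    #endif

    #include <string>

    // Hypothetical declaration: with the macro still defined, the parameter
    // name would expand to the keyword 'struct' and this would not compile.
    void listen_on(const std::string& interface, int port);
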
The ninja command may relaunch meson if meson files have changed.
As we need a proper environment (PKG_CONFIG_PATH, PATH) to let meson
configure properly, we also need to pass the environment to ninja.
zimlib doesn't use GitHub but Gerrit to handle changes.
As the patches are still under review, there is no meson branch for now.
Use a personal fork for now. As soon as the meson scripts have been
integrated in the openzim repository, we should revert this commit.
Ubuntu on 64 bits installs libs in lib/x86_64-linux-gnu and meson correctly
detects this.
Thus it installs the libs (zimlib, kiwix) in this directory. However, we
look for pkg-config files in $INSTALL_DIR/lib64, and so the lib is not
found.
We could force meson to install in $INSTALL_DIR/lib64 all the time, but
it is just better to follow the correct convention on Ubuntu.
Reuse the algorithm used in meson to correctly detect the lib prefix,
use it, and force all build scripts (autotools, cmake, meson) to install
there.
In the same way, ninja may be called ninja-build depending on the
distribution, and depending on how meson is installed, we may have to
launch meson or meson.py.
So we detect the command to launch, to be as portable as possible.
This script downloads and compiles all dependencies and subprojects for
kiwix-tools.
Ideally it should be as simple as running the script with the install dir
as argument.
This script compiles a dynamic or a static build of kiwix-tools
(kiwix-serve).
This has been tested on Fedora.
Binary content does not need to be modified, so we don't need to copy it.
We can serve it directly from the internal zim (cluster) buffer.
The handle_content function now uses getArticleObjectByDecodedUrl instead
of getContentByDecodedUrl.
This is to get the mimetype of the article and copy the content only when
needed (getContentByDecodedUrl always copies the content).
Thus, handle_content is a bit more complex, as it needs to do some
manipulation previously done in getContentByDecodedUrl.
The main change is that if the content is binary, we serve it with a
callback response that reads the content chunks directly from the blob
buffer.
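
A minimal sketch of the callback-response idea, assuming the server uses
libmicrohttpd and an old-zimlib zim::Article whose blob stays valid for the
lifetime of the response (everything except the libmicrohttpd API is an
illustrative assumption):

    #include <microhttpd.h>
    #include <zim/article.h>
    #include <zim/blob.h>
    #include <algorithm>
    #include <cstring>

    // Hypothetical reader callback: copy the next chunk straight out of
    // the blob buffer, without ever duplicating the whole content.
    static ssize_t blob_reader(void* cls, uint64_t pos, char* buf, size_t max)
    {
        auto* article = static_cast<zim::Article*>(cls);
        zim::Blob blob = article->getData();
        if (pos >= blob.size())
            return MHD_CONTENT_READER_END_OF_STREAM;
        size_t toCopy = static_cast<size_t>(
            std::min<uint64_t>(max, blob.size() - pos));
        memcpy(buf, blob.data() + pos, toCopy);
        return toCopy;
    }

    static void blob_free(void* cls)
    {
        delete static_cast<zim::Article*>(cls);  // release when done
    }

    // Build a streaming response instead of copying the whole blob into a
    // memory buffer.
    static struct MHD_Response* make_blob_response(const zim::Article& article)
    {
        auto* ctx = new zim::Article(article);
        return MHD_create_response_from_callback(
            ctx->getData().size(),  // total size, known in advance
            64 * 1024,              // chunk size handed to the callback
            blob_reader, ctx, blob_free);
    }
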
Instead of having one big callback function doing almost everything to
handle a request, we split the code into several functions.
There are two new helper functions:
- build_response, which creates a response object with the correct headers
set.
- compress_content, which compresses the content if necessary.
All the different cases are handled by dedicated functions:
- handle_suggest
- handle_skin
- handle_search
- handle_random
- handle_content
- handle_default
accessHandlerCallback now handles the common stuff and delegates everything
else to the handle_* functions.
There is no special optimization made here, only splitting the code.
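
A minimal sketch of the resulting structure, with purely illustrative types
and routing (the real handlers work on libmicrohttpd connections, and the
real accessHandlerCallback also does the header and compression work
described above):

    #include <string>

    // Hypothetical request/response types; only the dispatch is shown.
    struct Request  { std::string url; };
    struct Response { /* status, headers, body, ... */ };

    static Response handle_suggest(const Request&) { return {}; }
    static Response handle_skin(const Request&)    { return {}; }
    static Response handle_search(const Request&)  { return {}; }
    static Response handle_random(const Request&)  { return {}; }
    static Response handle_content(const Request&) { return {}; }
    static Response handle_default(const Request&) { return {}; }

    Response accessHandlerCallback(const Request& request)
    {
        // Common work (URL decoding, logging, ...) would happen here,
        // then the request is delegated to the dedicated handler.
        const std::string& url = request.url;
        if (url.rfind("/suggest", 0) == 0) return handle_suggest(request);
        if (url.rfind("/skin", 0) == 0)    return handle_skin(request);
        if (url.rfind("/search", 0) == 0)  return handle_search(request);
        if (url.rfind("/random", 0) == 0)  return handle_random(request);
        if (url == "/")                    return handle_default(request);
        return handle_content(request);
    }
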