Building Coqui STT native client for Windows¶
Now we can build the native client of 🐸STT and deploy it on Windows using the C# client. To do that we need to compile the native_client.
Visual Studio 2019 Community
Visual Studio 2019 BuildTools
Inside the Visual Studio Installer enable MS Build Tools and the VC++ 2019 v16.00 (v160) toolset for desktop.
If you want to enable CUDA support you need to follow the steps in the TensorFlow docs for building on Windows with CUDA.
We highly recommend sticking to the recommended versions of CUDA/cuDNN in order to avoid compilation errors caused by incompatible versions. We only test with the versions recommended by TensorFlow.
Getting the code¶
We need to clone coqui-ai/STT and initialize its TensorFlow submodule:

git clone https://github.com/coqui-ai/STT
cd STT
git submodule sync tensorflow/
git submodule update --init tensorflow/
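To confirm the TensorFlow submodule checked out correctly, you can inspect its status; this is a quick sanity check, run from inside the cloned STT directory:

```shell
# Show the pinned commit and checkout state of the tensorflow submodule;
# a leading "-" means it has not been initialized yet
git submodule status tensorflow/
```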
Configuring the paths¶
There should already be a symbolic link. For this example, let's suppose that we cloned into D:\cloned and the structure now looks like:
.
├── D:\
│   ├── cloned             # Contains 🐸STT and tensorflow side by side
│   │   └── STT            # Root of the cloned 🐸STT
│   │       ├── tensorflow # Root of the cloned coqui-ai/tensorflow
└── ...
Adjust the paths to match your own directory structure. For the structure above, if the symbolic link does not exist, create it with the following command:
mklink /d "D:\cloned\STT\tensorflow\native_client" "D:\cloned\STT\native_client"
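You can verify that the link exists (whether it was already present or just created) by listing reparse points in the tensorflow directory; a quick check using the example paths above:

```shell
:: List only symbolic links/junctions (reparse points) in the directory;
:: native_client should appear as a <SYMLINKD> entry
dir /AL "D:\cloned\STT\tensorflow"
```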
Adding environment variables¶
After you have installed the requirements, there are a few environment variables that we need to add to the PATH system variable.
For MSYS2 we need to add its bin directory. If you installed to the default location, the path to add should look like C:\msys64\usr\bin. Now we can run:

pacman -Syu
pacman -Su
pacman -S patch unzip
For BAZEL we need to add the path to the executable; make sure you rename the executable to bazel.exe.
To check the installed version you can run bazel version. For PYTHON we need to add the path to the python.exe executable.
If you build a CUDA-enabled native_client, we also need to add the CUDA bin directory to the PATH:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
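For the current shell session only, appending that directory could look like the following; this assumes the default CUDA v10.1 install path shown above (for a permanent change, edit the PATH system variable instead):

```shell
:: Append the CUDA bin directory to PATH for this session only;
:: this does not persist across shells or reboots
set PATH=%PATH%;C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin
```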
Building the native_client¶
There's one last command to run before building: you need to run configure.py inside the cloned tensorflow directory.
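Using the example layout from above, this step could look like the following sketch; the directory path assumes you cloned into D:\cloned:

```shell
:: Run TensorFlow's interactive configuration script before invoking bazel
cd D:\cloned\STT\tensorflow
python configure.py
```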
At this point we are ready to start building the native_client. Go to the tensorflow sub-directory; following our example it should be D:\cloned\STT\tensorflow.
We will add AVX/AVX2 support in the command. Please make sure that your CPU supports these instructions before adding the flags; if it does not, remove them.
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libstt.so
GPU with CUDA¶
If you enabled CUDA in the configure.py step, you can now add --config=cuda to compile with CUDA support.
bazel build --workspace_status_command="bash native_client/bazel_workspace_status_cmd.sh" -c opt --config=cuda --copt=/arch:AVX --copt=/arch:AVX2 //native_client:libstt.so
Be patient; if you enabled AVX/AVX2 and CUDA it will take a long time. Finally it should stop and show the path to the generated libstt.so.
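By Bazel convention, build outputs land under the bazel-bin convenience symlink in the workspace root, so with the example layout the library should appear at a path like the following (a sketch; the exact path may differ on your machine):

```shell
:: bazel-bin is a symlink Bazel creates in the workspace root pointing
:: at the build output tree; check that the library was produced there
dir D:\cloned\STT\tensorflow\bazel-bin\native_client\libstt.so
```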
Using the generated library¶
For now we can only use the generated libstt.so with the C# clients. Go to native_client/dotnet/ in your STT directory and open the Visual Studio solution, then build it in Debug or Release mode. Finally, copy libstt.so into the generated build output directory (Debug or Release, matching the configuration you built).
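As a sketch, the final copy could look like the following; the destination folder here is hypothetical and depends on your solution's project name, build configuration, and output settings:

```shell
:: Copy the native library next to the built C# client binaries
:: (destination path is an assumption -- use your solution's actual output dir)
copy "D:\cloned\STT\tensorflow\bazel-bin\native_client\libstt.so" "D:\cloned\STT\native_client\dotnet\STTConsole\bin\x64\Debug\"
```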