We make your apps run faster!

The major problem in developing applications that use the GPU for computation is optimization. For example, when working with a Tesla C2070 it is often unclear how to reach even 100 or 200 GFLOPS of its 515 GFLOPS peak double-precision performance, let alone 400-500 GFLOPS.

The situation is made worse by the fact that even high-quality, well-performing software is usually far from optimal. After porting an application to a slightly different GPU, all the improvements made to increase its performance can become useless or even detrimental if the new GPU's architecture differs from the old one. Furthermore, even when computations run on the same hardware but with dramatically different input data, some of those optimizations can again become pointless.

The solution to this problem is software autotuning, which tailors the application to the target system automatically. After a self-learning stage that takes place at runtime, the application adapts itself to the hardware it is running on, boosting performance and working around the nightmare of hand-optimizing for GPGPU.
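To illustrate the general idea of runtime autotuning (this is a generic sketch, not the Apptimizer API; all names and the toy kernel variants are hypothetical), a minimal autotuner can time several interchangeable implementations on the actual input during a learning phase and then use the fastest one:

```python
import time

def autotune(variants, args, repeats=3):
    """Time each candidate on the real input; return (name, callable)
    of the fastest. `variants` maps a name to an equivalent callable."""
    timings = {}
    for name, fn in variants.items():
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            fn(*args)
            best = min(best, time.perf_counter() - t0)
        timings[name] = best
    winner = min(timings, key=timings.get)
    return winner, variants[winner]

# Two interchangeable "kernel variants" of a dot product -- toy
# stand-ins for GPU kernels tuned for different architectures.
def dot_naive(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_chunked(a, b, chunk=1024):
    total = 0.0
    for i in range(0, len(a), chunk):
        total += sum(x * y for x, y in zip(a[i:i + chunk], b[i:i + chunk]))
    return total

a = [1.0] * 10000
b = [2.0] * 10000
name, fast_dot = autotune({"naive": dot_naive, "chunked": dot_chunked}, (a, b))
print(name, fast_dot(a, b))  # winning variant name and 20000.0
```

A real toolkit would, of course, time actual GPU kernels, cache the decision per 'hardware + input data' combination, and re-learn when either changes; the selection loop above is only the skeleton of that process.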

These approaches are implemented in the TTG Apptimizer toolkit, whose key feature is the ability to tailor an application to the 'hardware + input data' combination. The toolkit also distributes computations over all available CPUs and GPUs, chooses an appropriate version of the computational kernel for each processing unit, and optimizes the application according to a preset policy (fast learning, high-quality optimization, support for as many devices as possible, etc.).
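The load-distribution part of such a scheme can likewise be sketched generically: split the work among processing units in proportion to the throughput each demonstrated during the learning phase. Everything below is a hypothetical illustration, not the Apptimizer interface:

```python
def split_work(total_items, throughputs):
    """Divide `total_items` among devices in proportion to their
    measured throughput (e.g. items/sec from a learning phase)."""
    total_tp = sum(throughputs.values())
    shares = {dev: int(total_items * tp / total_tp)
              for dev, tp in throughputs.items()}
    # Hand any rounding remainder to the fastest device.
    fastest = max(throughputs, key=throughputs.get)
    shares[fastest] += total_items - sum(shares.values())
    return shares

# Example: one CPU and two GPUs, with throughputs from a learning run.
shares = split_work(1_000_000, {"cpu": 50, "gpu0": 400, "gpu1": 350})
print(shares)  # {'cpu': 62500, 'gpu0': 500000, 'gpu1': 437500}
```

Proportional splitting is the simplest possible policy; a production scheduler would also account for data-transfer costs between host and device and re-balance as measurements accumulate.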

Testing carried out by our company has shown that the toolkit increases performance by 25-40% when an application runs on a single GPU and by 50-200% when an application uses a CPU and several GPUs simultaneously. Technical documentation on TTG Apptimizer is available in this section. To get a free Lite version of our toolkit, please send a message to