All posts by Besir Kurtulmus

Trustless Machine Learning Contracts: Evaluating and Exchanging Machine Learning Models on the Ethereum Blockchain

Machine learning algorithms are being developed and improved at an incredible rate, but they are not necessarily becoming more accessible to the broader community. That’s why today Algorithmia is announcing DanKu, a new protocol for evaluating and purchasing ML models on a public blockchain such as Ethereum. DanKu enables anyone to get access to high-quality, objectively measured machine learning models. At Algorithmia, we believe that widespread access to algorithms and deployment solutions is going to be a fundamental building block of a balanced future for AI, and DanKu is a step towards that vision.

The DanKu protocol uses blockchain technology via smart contracts. A contract allows anyone to post a dataset, an evaluation function, and a monetary reward for whoever provides the best trained machine learning model for that data. Participants train deep neural networks to model the data and submit their trained networks to the blockchain. The blockchain then executes these neural networks to evaluate the submissions, ensuring that payment goes to the best model.
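To make the flow concrete, here is a minimal sketch of that lifecycle in plain Python rather than an on-chain contract language. Every class, method, and field name below is a hypothetical stand-in for the real contract's functions, not the actual DanKu interface:

```python
# Illustrative lifecycle of a DanKu contract in plain Python (the real
# contract runs on Ethereum; all names here are hypothetical).

from dataclasses import dataclass, field

@dataclass
class Submission:
    author: str            # address of the participant
    weights: list          # serialized neural network parameters
    accuracy: float = 0.0  # filled in during evaluation

@dataclass
class DanKuContract:
    organizer: str         # address that posted the problem and escrowed funds
    test_data: list        # (features, label) pairs used by the evaluation function
    reward_wei: int        # escrowed reward, paid to the best submission
    submissions: list = field(default_factory=list)

    def submit(self, author: str, weights: list) -> None:
        """Participants submit their trained networks before the deadline."""
        self.submissions.append(Submission(author, weights))

    def evaluate_and_pay(self, forward_pass) -> str:
        """Run every submitted model on the test data; reward the most accurate."""
        for s in self.submissions:
            correct = sum(1 for x, y in self.test_data
                          if forward_pass(s.weights, x) == y)
            s.accuracy = correct / len(self.test_data)
        best = max(self.submissions, key=lambda s: s.accuracy)
        # In the real contract, this step would transfer reward_wei to best.author.
        return best.author
```

Because evaluation runs inside the contract itself, neither party has to trust the other: the reward is escrowed up front and released only to the winning submission.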

The contract allows for the creation of a decentralized and trustless marketplace for exchanging ML models. This gives ML practitioners an opportunity to monetize their skills directly. It also allows any participant or organization to solicit machine learning models from all over the world, which will incentivize the creation of better models and make AI more accessible to companies and software agents. Anyone with a dataset, including software agents, can create DanKu contracts.

We’re also launching the first DanKu competition for a machine learning problem. Read More…

Advanced Algorithm Design

We host more than 4,000 algorithms for over 50,000 developers. Here is a list of best practices we’ve identified for designing advanced algorithms. We hope this can help you and your team. Read More…

Adding multilingual support to any algorithm: pre-translation in NLP

We often get asked whether we’re planning to add any non-English NLP algorithms. As much as we would love to train NLP models on other languages, there aren’t many usable training datasets in those languages. And, due to their linguistic structure, training with pre-existing approaches doesn’t always give the best results.

Until better training sets can be generated, one passable solution is to translate the text to English before sending it to the algorithm. Read More…
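To make the pre-translation idea concrete, here is a minimal sketch using the Algorithmia Python client. The algorithm paths, input fields, and result keys below are assumptions for illustration; check each algorithm’s marketplace listing for its exact schema:

```python
# A minimal sketch of a pre-translation pipeline with the Algorithmia
# Python client. The translation algorithm path and both input/output
# schemas are assumed for illustration.

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

def analyze_sentiment_any_language(text: str):
    # Step 1: translate the input into English first
    # (hypothetical translation algorithm path and schema).
    translated = client.algo("translation/MachineTranslation").pipe({
        "text": text,
        "target_language": "en",
    }).result["translated_text"]

    # Step 2: run the English-trained NLP algorithm on the translation.
    return client.algo("nlp/SentimentAnalysis").pipe({
        "document": translated,
    }).result

print(analyze_sentiment_any_language("Das Essen war ausgezeichnet!"))
```

The same wrapper pattern works for any English-only NLP algorithm: translate once on the way in, then pipe the result through unchanged.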

Style Transfer with StyleThief

Style transfer is the term for reimagining an image in the style of a given piece of art. Recently, various research groups have proposed different approaches to style transfer, each with its own trade-offs.

For example, in one of our previous spotlights we talked about DeepFilter, where you train a model on a style for about a day and can then stylize images with it almost instantaneously. The biggest issue with this technique is that training doesn’t always yield the best results; you may need to train multiple times, which can easily add up to a few days.

StyleThief works differently from DeepFilter. It takes a long time to train for each image, but it is more robust and yields better stylized images, trading speed for quality. Read More…
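For the curious, here is a hedged sketch of what invoking StyleThief through the Algorithmia Python client might look like. The algorithm path, input keys, and data paths are assumptions; consult the algorithm’s listing for its real interface:

```python
# A hedged sketch of calling StyleThief via the Algorithmia Python client.
# The algorithm path and input keys below are assumed for illustration.

import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")

result = client.algo("deeplearning/StyleThief").pipe({
    "content": "data://.my/images/photo.jpg",       # image to restyle
    "style": "data://.my/images/starry_night.jpg",  # artwork providing the style
}).result

print(result)  # path or URL of the stylized output (assumed return shape)
```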