20 random bookmarks
Bookmarks and whatnot.
Thoughts, tools, and libraries I use to make games
{dogsCount,plural,=0{No dogs}one{One dog is}other{# dogs are}} here.
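That's ICU MessageFormat plural syntax: an exact match like =0 wins first, then the locale's CLDR plural keyword (for English, "one" means exactly 1, "other" covers the rest), and # stands for the count. A minimal Python sketch of that selection logic, not the real ICU API (format_dogs is a made-up helper):

```python
def format_dogs(dogs_count: int) -> str:
    # Branches copied from the pattern above.
    branches = {"=0": "No dogs", "one": "One dog is", "other": "# dogs are"}
    exact = f"={dogs_count}"
    if exact in branches:          # exact matches have priority over keywords
        chosen = branches[exact]
    elif dogs_count == 1:          # English CLDR rule: "one" means exactly 1
        chosen = branches["one"]
    else:
        chosen = branches["other"]
    return chosen.replace("#", str(dogs_count)) + " here."

print(format_dogs(0))  # No dogs here.
print(format_dogs(1))  # One dog is here.
print(format_dogs(5))  # 5 dogs are here.
```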
“The longest-running vaporware story in the history of the computer industry”
«ВикиГриб» 🍄, an encyclopedia and reference guide to mushrooms: a catalog of all mushroom species growing in the CIS (Russia, Ukraine, Belarus).
Good links.
A wiki about concatenative programming languages, running a custom (and good-looking) wiki engine with a custom markup!
The author says Go is good because it's stuck in the '70s, and tells a bit about what the '70s were like. Mentions the IBM 360, Oberon, and whatnot.
Quickly design and test ActivityPub objects with mock servers of popular projects like Mastodon and Pixelfed.
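For context, the kind of object being mocked: a bare ActivityStreams Note, the shape ActivityPub software like Mastodon and Pixelfed exchanges inside a Create activity. Shown here as a Python dict; all URLs are made-up placeholders.

```python
# A minimal ActivityStreams "Note" object (shape per the spec).
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "id": "https://example.social/notes/1",
    "attributedTo": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "content": "Hello, fediverse!",
}
```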
Thanks, just in time.
A collection of links about web archiving.
One can use gzip to classify data.
Deep neural networks (DNNs) are often used for text classification due to their high accuracy. However, DNNs can be computationally intensive, requiring millions of parameters and large amounts of labeled data, which can make them expensive to use, to optimize, and to transfer to out-of-distribution (OOD) cases in practice. In this paper, we propose a non-parametric alternative to DNNs that’s easy, lightweight, and universal in text classification: a combination of a simple compressor like gzip with a k-nearest-neighbor classifier. Without any training parameters, our method achieves results that are competitive with non-pretrained deep learning methods on six in-distribution datasets. It even outperforms BERT on all five OOD datasets, including four low-resource languages. Our method also excels in the few-shot setting, where labeled data are too scarce to train DNNs effectively.
Our method is a simple, lightweight, and universal alternative to DNNs. It’s simple because it doesn’t require any preprocessing or training. It’s lightweight in that it classifies without the need for parameters or GPU resources. It’s universal as compressors are data-type agnostic, and non-parametric methods do not bring underlying assumptions.
Without any pre-training or fine-tuning, our method outperforms both BERT and mBERT on all five datasets.
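The method really does fit in a few lines: compressed lengths from gzip give a normalized compression distance (NCD), and a nearest-neighbor vote over training texts does the classifying. A minimal sketch under those assumptions, with toy data and 1-NN instead of the paper's k-NN:

```python
import gzip

def clen(s: str) -> int:
    """Length of the gzip-compressed UTF-8 bytes of s."""
    return len(gzip.compress(s.encode()))

def ncd(a: str, b: str) -> float:
    """Normalized compression distance: similar texts compress well together."""
    ca, cb = clen(a), clen(b)
    return (clen(a + " " + b) - min(ca, cb)) / max(ca, cb)

def classify(query: str, train: list[tuple[str, str]]) -> str:
    """Label of the training text nearest to the query under NCD (1-NN)."""
    return min(train, key=lambda pair: ncd(query, pair[0]))[1]

train = [
    ("the cat sat on the mat", "animals"),
    ("stocks fell sharply on wall street today", "finance"),
]
print(classify("a dog sat on the rug", train))  # expected: animals
```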
Questioned:
Pretty pixel things.
Notes on Uxn.