A few decades ago, the only well-known way to deliver code to a server and make it accessible over the internet was moving files via FTP in Total Commander, FileZilla, or FAR Manager, manually copying files and folders from the left pane to the right one. The more advanced among us preferred standard UNIX tools like scp or rsync instead, but the process was essentially the same.
...
The quiet revolution happened in 2000. Not on Windows Server, and not yet on Linux — but on FreeBSD, a UNIX-derived operating system that was a default choice for server professionals long before Linux dominated the space.
FreeBSD is worth a brief aside here, because it differs from Linux in a fundamental way. Linux is a kernel. What most people call "Linux" is actually that kernel combined with a GNU userland, a package ecosystem, and a set of choices that vary from distro to distro — Ubuntu, Fedora, and Arch are all running the same kernel but are meaningfully different systems underneath.
FreeBSD ships as a complete, coherent OS — kernel, userland, base tools, and libraries all developed together, versioned together, and tested together as a single unit. That coherence matters. It's part of why FreeBSD solutions tend to be cleaner and why the base system behaves consistently across installations.
The solution FreeBSD built on top of that coherent foundation was called jails. Described by Poul-Henning Kamp and Robert Watson in their 2000 paper and shipped as a native kernel feature in FreeBSD 4.0 in March 2000, jails took the chroot idea and completed it, adding full network isolation, process isolation, and proper security boundaries.
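To give a flavor of what that looks like in practice, here is a minimal jail definition in the declarative jail.conf(5) format. (A caveat: this format arrived much later than 4.0 — early jails were launched directly with the jail(8) command — and the name, path, and IP address below are invented for illustration, not taken from the article.)

```
# /etc/jail.conf — hypothetical minimal jail; "demo" and its paths are illustrative
demo {
    path = "/jails/demo";              # the jail's private filesystem root (the chroot part)
    host.hostname = "demo.local";
    ip4.addr = "192.0.2.10";           # the jail sees only this address (network isolation)
    exec.start = "/bin/sh /etc/rc";    # boot the jail's own userland
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;                       # give the jail its own devfs instance
}
```

With a FreeBSD userland installed under /jails/demo, something like `service jail start demo` should bring it up; processes inside see their own filesystem, hostname, and address, and cannot reach outside the boundary.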
...
Trisha Gee
Observability as the key to performance tuning (Software Delivery)
The golden rule of application performance tuning: measure, don’t guess. Yet when it comes to developer productivity, too many teams still guess. Builds are slow, tests are flaky, CI feels overloaded—and the default response is to throw hardware at the problem or hope it goes away.
In this talk, we’ll apply the performance engineering mindset to developer experience, showing how observability data from Develocity can profile builds and tests just like applications. By measuring and optimizing build and test performance, teams directly improve the DORA metrics that matter: shorter lead time for changes, lower change failure rates, faster recovery, and higher deployment frequency.
Developer productivity is a performance problem. If you want faster delivery and happier developers, the path is the same as for applications in production: measure first, then optimize.
What happens if we can't make another CPU...ever?
What fails first? How long would datacenters last? Does the Internet start to fracture?
Of course, it's a hypothetical thought experiment. But it's interesting to think about which chips will stand the test of time, and which might fail sooner than you'd expect!
Here are the other channels I mentioned; take a look:
@jeriellsworth made microchips at home, and is an excellent engineer + teacher, go check her out!
@KazeN64 is doing wild optimizations with N64 hardware:
Former Oculus CTO reacts to a "thought experiment" about a "CPU apocalypse"
John Carmack, a celebrity in the world of video game development and technology, has often presented himself as an advocate of software optimization. Former CTO of Oculus VR and co-founder of id Software (an American video game company he left in 2013), Carmack redefined what we expect from a game engine and an immersive experience. Recently, he sparked a bold debate that could well upend our view of technological evolution: what if, in reality, we are not so dependent on cutting-edge hardware?
Reacting to a "thought experiment" posted on the social network X (formerly Twitter) about a "CPU apocalypse", Carmack argued that the real problem is not a lack of power in modern processors but the inefficiency of today's software. If software optimization were treated as a priority, he maintains, many more systems around the world could run effectively on older hardware without sacrificing performance. In other words, if hardware innovation stopped, market pressures would push companies to drastically improve software efficiency.
Cleanup, Speedup, Levelup.
One package at a time.
e18e (Ecosystem Performance) is an initiative to connect the folks and projects working to improve the performance of JS packages.
We'd also like to give visibility to the efforts of countless open source developers working to clean up, level up, and speed up our dependencies.
We invite you to get involved in the different projects linked from these pages, and to connect with other like-minded folks.