Identity

24.03.04

This is a great concept I found on Hacker News. The idea is that for each section of your product, you have a root account, and that account can have multiple identities.

Identities can be used to separate different areas of concern. This allows functionality or data to be safeguarded per identity.

Here's the original quote:

I think this is definitely the way to go for any modern login system that plans to support multiple external OAuth providers in addition to its own internal email/password identity.

Decoupling the account from the identity allows users to log in to the same account with any identity provider, and have access to all the private resources associated with any external identity within that same account. Think of a storage service that aggregates user data from Dropbox, Google Drive, and OneDrive, and allows you to log in with any one of the 3 OAuth providers to access data from all 3. With a naive model that couples identity to account, this kind of identity federation would be much more difficult to support and extend to additional identity providers.

For those looking to implement support for identity federation in a microservice environment, I'd highly recommend taking a look at Dex from CoreOS: https://github.com/coreos/dex

It provides a consistent OpenID Connect based interface to multiple identity providers and identity management systems, for your backend to interact with (i.e. Dex talks to identity providers using whatever protocols they support, and your backend talks to Dex using OpenID Connect).
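To make the decoupling concrete, here's a minimal sketch of the account/identity split in Rust. All names here (Provider, IdentityStore, etc.) are illustrative assumptions of mine, not from Dex or the quoted systems:

```rust
// Minimal sketch of decoupling accounts from identities: one root account
// owns many identities, and logging in with any linked identity resolves
// to the same account.
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct AccountId(u64);

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum Provider {
    EmailPassword,
    Google,
    Dropbox,
}

// An identity is a (provider, external subject id) pair.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct Identity {
    provider: Provider,
    subject: String,
}

#[derive(Default)]
struct IdentityStore {
    // Many identities map onto one root account.
    by_identity: HashMap<Identity, AccountId>,
}

impl IdentityStore {
    fn link(&mut self, identity: Identity, account: AccountId) {
        self.by_identity.insert(identity, account);
    }

    // Any linked provider logs the user in to the same account.
    fn login(&self, identity: &Identity) -> Option<AccountId> {
        self.by_identity.get(identity).copied()
    }
}
```

With this shape, linking a Google identity and a Dropbox identity to the same AccountId means either login path reaches all of that account's resources, which is exactly the federation the quote describes.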

Autechre Interview

24.01.18

I love reading about Autechre and their process. Often it's hard for me to find a particular piece I found in the past, so I'll start posting them here. This particular interview can be found on this website.

Sean Booth interview by Peter Hollo – March 2005

Sean Booth sounds like he’s just rushed in when he picks up the phone – and for good reason; he has. He spent the night housesitting for a friend whose place had been broken into while they were away – a disturbing occurrence because it’s so rare in a country town in Suffolk (north-west of London) where Booth lives.

‘I grew up in Middleton, which is part urban, but it backs onto local farms and such, so it’s a bit of both up there – old-school working class, I suppose. I wouldn’t say I was a city gent; I mean I used to spend a lot of time in Manchester when I was growing up because it was only about seven miles away.’

He then lived in Sheffield (home of Warp Records) for some years, but when Autechre’s other half, Rob Brown, moved to London he decided to find himself somewhere closer there. ‘Rob’s lived in London since 1998,’ he explains. ‘But in terms of working together, when we started out we used to live about eight or nine miles apart, and usually by the time we’d finished working it was too late to get a bus home, so I’d just walk. I’m kinda used to having a bit of a distance, but these days we’re only an hour-and-a-half away from each other, so it’s kinda like me living in London, except that it’s ... not London!

Having that distance between them isn’t a big deal. ‘We’ve always worked separately,’ says Booth. ‘The Autechre thing is kinda like a crew name – sometimes I do tunes and Rob really likes them, and they come out as Autechre. Sometimes Rob does tunes and I really like them. Sometimes we do a bit and then we hand each other the bit, or we’re in the room together, and we hand it back and forth. There’s never a set way that we work together – we do it every single way we can. We’re both interdisciplinary; there are no set areas of expertise. We do have slightly different aesthetic tendencies, and we’re quite good at capitalising on those differences, but it’s a completely adaptive process. It could just be: turn on one piece of equipment, hit a pad and go on with that sound for a while, or it can be sitting down for ages building something to use.

‘In Sheffield I was living in a warehouse, and it was like, you’d get up at 11am, look out your window – all bleary because you’d been caning it or whatever – and there’s just loads of people going about their business. Look out the back and there’s this factory, milling, constantly – all you can hear is a bandsaw, just going for it. For four years, it starts to grind you down. It’s irritating basically, constantly seeing adverts for products, and people going about what basically seems like quite boring business to you because you’re trying to reach some kind of creative spot.

‘I find it loads easier to write tracks out here because there’s so much space, and so little contemporary culture – I look up and all I see is farms and trees and the occasional kid wearing a baseball cap. I’ve never drawn all that much from contemporary culture – I’ve always ignored it, or tried to. I like to have windows open and like to be able to see what’s going on in the outside world; I don’t like to have my blinds down all day.

Autechre’s methodology encompasses everything from analogue acid to digital crispness, generative techniques to intricate programming. It can be hard to pin down the sources in their music, but 2001’s Confield certainly brought the algorithmically-generated structures to the fore. ‘The generative stuff – some of it’s process-based; a track like ‘VI Scose Poise’, for example, is completely process-based. That was a process made in Max [a program for creating sound-generating and -processing objects from the ground up] as a kind of sequencer, spitting out MIDI data. It was built just to run. It had various counters that would instigate various changes in the way the patch behaved. We’d hit “Start” and listen to it, and if it did something wrong we’d change whatever variable it was that was making it go wrong, then run the process again. This was completely hands-off.

‘Then a track like ‘Uviol’ was made using a sequencer we’d built that changed what it was generating according to parameters we set with faders, so we’d spend a lot of time building it very soberly, and then we’d spend a lot of time very un-soberly playing it. A lot of the tracks on Confield are like that – they’re basically made in real-time using sequencers where we’d spent a lot of time making this thing that would generate music according to a few set parameters, and then we’d mess around with the parameters in order to make the music later, when we were in a different frame of mind.

‘Draft 7.30 is very different, because it’s almost 100 per cent composed, with very little playing or real-time input or anything. Untilted is different again, it’s basically loads of different sequences all running together. We’ve used so many hardware devices this year compared to Draft 7.30 – on Confield there are a few hardware bits and pieces, a few analogue sequences being used there as well. On Untilted, it’s basically everything – bits of drum machines, old MIDI sequencers, old analogue sequencers, MPCs, basically the whole gamut of equipment we’ve had around us for ages, but used in slightly different combinations – in some ways more traditionally, in some ways less so.’

Autechre seem to have gotten excited about going back to these roots after intensive use of computers and algorithms. ‘The thing about a lot of analogue kit is that you haven’t got that opportunity for review, and you can basically sit there and drift off into another world – just get on with doing the tune – and it’s the same with a lot of MIDI sequencers. For me, a lot of interfaces that don’t give you a screen to look at – don’t give you a time-line to deal with – are more conducive to making music that’s well-paced. Most of our best work has been made on non-timeline sequencers. We still use timeline, especially for editing audio, but for working with MIDI it can be a bit stagnating. I don’t tend to use the computer a lot these days.

‘We do play keyboards sometimes – for beats and stuff as well. We have pads in here, keys, loads of MIDI controllers – basically our studio’s just a massive interface, tables covered in input devices. I really like physical interfaces; when we first bought the Nord Lead, it was the interface that did it for me. The storability, and the fact that it didn’t quite sound analogue, just didn’t come into it. The interface was so amazing; I could get so much done in such a short time, compared to any other virtual analogue synth around at the time – and it also sounds amazing, for what it is. I just love touching stuff and listening to it; I don’t like mouse control, controlling knobs and faders on a screen. I can still write stuff just inputting data, but I quite like being able to play it.

‘Sometimes I’ll just play the beats, and sometimes it’ll be mad editing; sometimes a bit of both, or it’ll be a process that’s then been edited into something that sounds musical. A lot of the electronic music I hear these days seems to be people who only know two or three ways of doing things – they don’t tend to vary their method very much. They’re over-commodifying themselves in a way, like they need to have a big trademark on everything they’re doing. It’s very habit-based, and the kind of thing I try to shy away from, I tend to shy away from anyway.’

It’s hard to get Booth to talk about whether Autechre try to communicate anything with their music or whether they even think about the listener when making their music. For a member of a duo whose music has an immense emotional impact on many of their fans, Booth is reluctant to impute any emotional content, or so it seems. He is, however, a fanatic about sounds as sounds.

‘A lot of our music is sample-based. The samples might not be immediately obvious, but that’s the way we like it really. I’m into physical modelling – everybody is these days – but if I’m working with models I prefer to do it in non-realtime situations, or using devices that have been specifically geared around giving you very little access to the parameters necessary to control the model. It might sound counter-intuitive but it makes sense in terms of writing music. It really depends on what’s available at the time. I’m really into modelling just as a science, so I can do it on a Nord and a couple of effects units; I can make samples that sound like breakbeats. Sometimes we’ll sample sounds that sound like they’ve been synthesised, because they’re so bizarre, and yet they’re natural.

‘I don’t know that we’ve ever considered ourselves to be sample-based or not. I like the way all the sounds sit together. There are a lot of samples on Untilted – some of them obvious (it depends on your history) and some of them unobvious, regardless of your history, because of what we’ve done to them. For Confield we used loads of drum machines and analogue kit on there, but that’s the thing: because of people’s perception, they kind of just stare past it. ‘They’re using a DMX on there? It can’t be a DMX because the beats are going all over the shop!’ Well, they’re doing that because it’s plugged into this delay that’s being re-triggered by its own output, and the delay’s from about 1983 too.

‘I remember being in a studio years ago. We’d met Daz [Darrel Fitton aka Bola] when he was working in a music shop, and he’d let us use some of his equipment. We were messing around with this Ensoniq keyboard that had this sound on there that could’ve been a piano through a chorus, but it wasn’t really – it was really obscenely bent up. As I was messing around with it, this kid came upstairs and was going, “What you doing there?” I was like, “I dunno, I’m really feeling this sound for some reason,” and I’m laughing ‘cause it was a preset, and he was like “Oh, what, chorused piano?” And I remember thinking, “It’s not just chorused piano, it’s fucking weird,” but the fact he’d identified what it was, in literal terms, meant that I just had to accept his description of it. So many musicians I meet these days are like that – you know, so happy to have tagged something it is that you’ve done, or somebody’s done, in a track: “He’s just compressed his kick drum.” And you’re going, “He’s not just done that; I mean what compressor is he using? That sounds fucking weird, have you heard the attack time on that?” There’s more anal things to be said about it sometimes.

‘A lot of the time it’s because we don’t advertise our methods very much. When we do they’re really transparent, but often you don’t really realise what the source is of what you’re listening to – that’s not the point of what we’re doing. We’re trying to just make things be what they are. It’s like if you were to take a little picture of a mountain that you had embroidered, and repeat it twenty times, it wouldn’t be a picture of a mountain repeated twenty times – it’d be this weird pattern. That means nothing … but in a way, maybe it means everything. If Autechre’s music is about anything, it’s about pushing the boundaries, making the familiar unfamiliar, and maybe repeated embroidered mountains is the perfect metaphor.

‘I mean, context – it’s one of those weird things. I’ve never understood how people hear what we do. It’s like chucking rocks in a pool, looking at reactions to what we do – it’s strange. Some people say, “It’s really great,” and some say, “I fucking hate this, what’s all the fuss about?” Well it’s like “fuss” ... at least someone’s making a fuss.’

GameDSL

23.12.24

I've been sick the past few days. Always seems to happen during summer or the holidays. As such, I've had a bit of spare time to give an initial shot at a compiler.

Here's the repo.

The goal is a simplified language that forces good data driven practices.

23.08.29

BeetleLang is going well. Today I added in comments for function definitions, cleaned up some unnecessary values and determined the next steps.

One thing I've been doing lately is checking in a failed unit test for what I'll be working on next. This makes it easy to switch contexts back to what I was doing previously. I would typically not suggest doing it on a main branch, or checking it into production, but for feature work it's been working great.
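The checked-in failing test pattern might look something like this in Rust (the parser function and its behavior are hypothetical examples of mine, not actual BeetleLang code):

```rust
// Hypothetical example of checking in a "next task" test. The stub
// panics until the feature is implemented, so the test documents intent;
// `#[ignore]` keeps CI green if this lands on a shared branch.
fn parse_doc_comment(_source: &str) -> Option<String> {
    todo!("next task: attach doc comments to function definitions")
}

#[test]
#[ignore = "work in progress: un-ignore once parse_doc_comment lands"]
fn doc_comments_attach_to_function_definitions() {
    assert_eq!(
        parse_doc_comment("## adds two ints\nfn add"),
        Some("adds two ints".to_string())
    );
}
```

Coming back to the project, `cargo test -- --ignored` immediately points at the work in progress.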

The other major help on this project is heavy usage of enums. This allows one to encode various states into a single piece of data. Rust's tooling lets you easily match on them, providing compile-time-guaranteed handling of each state. Elixir has similar capabilities with pattern matching, but unfortunately is not statically typed. While Dialyzer helps, it isn't a replacement for a truly statically typed language. The tradeoff is the ability to quickly iterate in a REPL, so decide which is most important to you when choosing a language for a project.
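As a small illustration of the enum pattern (the states below are made up for the example, not BeetleLang's real ones):

```rust
// Encoding distinct states in one enum. The compiler forces every match
// arm to be handled, so adding a new state breaks the build until every
// call site accounts for it.
#[derive(Debug)]
enum CompileState {
    Parsing { line: usize },
    TypeChecking,
    Emitted(String),
    Failed(String),
}

fn describe(state: &CompileState) -> String {
    // A non-exhaustive match here is a compile error, not a runtime surprise.
    match state {
        CompileState::Parsing { line } => format!("parsing line {line}"),
        CompileState::TypeChecking => "type checking".to_string(),
        CompileState::Emitted(path) => format!("wrote {path}"),
        CompileState::Failed(msg) => format!("error: {msg}"),
    }
}
```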

BeetleLang

23.08.26

Like always, I enjoy compiler design. My newest attempt is BeetleLang.

This is a language that is intended to allow easy scripting integration into C++ projects. It does this by targeting Lua. I've added JavaScript as an alternative target as well. My intent with two targets is to minimize the chance that target languages bleed into the source language.

Ideally it will have a syntax inspired by Haskell and Elixir, with the benefits of a statically checked language.

Instead of working on the syntax first, I've started with the backend. This enables me to iterate quickly and output code in my target languages. I'm taking a loose approach where I don't do any sort of validation on the backend. This will instead be done in a different module to ensure I only need to work on one slice at a time. Another benefit is if I get the backend and translation right, I could reuse it with a different front end. That is out of scope for now.
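A backend-first, validation-free approach like this might look roughly as follows. The AST shape and emitter here are my guesses at the idea, not BeetleLang's actual code:

```rust
// Sketch of a validation-free backend: walk a tiny AST and emit Lua
// source as strings. Nothing here checks that the program makes sense --
// that belongs to a separate module. A sibling JavaScript emitter over
// the same AST is what keeps either target from bleeding into the
// source language.
enum Expr {
    Int(i64),
    Var(String),
    Call { name: String, args: Vec<Expr> },
}

fn emit_lua(expr: &Expr) -> String {
    match expr {
        Expr::Int(n) => n.to_string(),
        Expr::Var(name) => name.clone(),
        Expr::Call { name, args } => {
            let args: Vec<String> = args.iter().map(emit_lua).collect();
            format!("{}({})", name, args.join(", "))
        }
    }
}
```

Because the backend is just a tree-to-string walk, swapping targets means writing one more `emit_*` function rather than touching the front end.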

Next steps are enhancing function calls and function definitions.

Company wide updates

23.08.22

This is from an article on Hacker News about why KPIs are destroying businesses. While I don't know how applicable it is, it's something I would love to try out. Rather than abstract statistics, it talks about sharing screenshots and what the team delivered each week. It seems quite ideal.

This post by hliyan sums it up:

In the last engineering team I managed (by far the most successful one up to date), we discarded most velocity measures in favour of a simple solution: every Friday, each team sends a brief "here's what we delivered this week" email to the whole company. It contains some screenshots and instructions like "agents can now update the foobar rate from their mobile app without having to call in". After a month or two of these going out like clockwork, it gave both management and business stakeholders a level of comfort that no amount of KPI dashboards ever could. KPIs compress away details that stakeholders need to form an understanding. A full blast from the task tracker is overkill. This solution worked for us. But of course, required a solid feature flag practice so that half-built features (story slices) could be released weekly for testing without affecting production. That said, we did maintain a quarter-level sum-of-t-shirt-size based KPI.

Build Fast

23.08.12

From my perspective, speed is about two things: more, smaller iterations, and confirming you're working toward a desired outcome. Over my career, I've been surprised multiple times when I presented an early draft to a stakeholder and they said, "oh, that's great, I've got what I need now"... and this was maybe 1/3 of the effort I was planning for.

The way I see it, if the problem is important - any early solution should provide some relief. If some initial relief isn't wanted, the problem is probably not important.

Along these lines, in my work with stuck startup founders, I often ask, "if we were going to demo a prototype to a customer (in 2 days | at the end of the week), what would it be?"

- garrickvanburen

Runtime

23.07.24

Lately I've been wanting to do a new project. I love scripting languages and compilers, but they're much harder to get off the ground than you would think. The issue is always time. So why not do something in that vein that I can reuse?

Writing your own language is tricky, and these days languages increasingly require excellent tooling integration. For me the thorn has been a language server. As a developer, I like my tooling. If I want to write a language, it needs good tooling (or at minimum, some). I have no desire to build that out right now, so I'll switch to something else.

Why not pick an existing language? Implement a new runtime for it? That sounds like fun.

Some time passes...

After viewing some HackerNews posts, I decided to make a trivial example based off of some existing repos. The credit is in the comments.

#ifndef SOME_HEADER_H
#define SOME_HEADER_H
#pragma once

#ifdef __cplusplus
extern "C" {
#endif

#include <stdint.h>
#include <stdio.h>

typedef struct cpu {
    int32_t registers[32];
    uint32_t pc;
    uint32_t sp;
    uint32_t *ram;
} cpu;

typedef struct {
    void (*opcode)(cpu *c, const short args[]);
    char num_args;
} instruction;

void add(cpu *c, const short args[]) {
    c->registers[args[2]] = c->registers[args[0]] + c->registers[args[1]];
}

void run_vm() {
    static const instruction i_add = {&add, 3};
    // https://github.com/gravypod/computer/blob/master/isa/main.c#L89-L120
    // https://news.ycombinator.com/item?id=15607509
    static const instruction *opcodes[] = {
        NULL,   // 0000 0000
        &i_add, // 0000 0001 (from, value, to)
        // &cmp_instruction,  // 0000 0002 (left register, right register, to)
        // &jmp_instruction,  // 0000 0003 (register with to location)
        // &jmpc_instruction, // 0000 0004 (register with to location, condition register)
        // &mov_instruction,  // 0000 0005 (from, to)
        // &set_instruction   // 0000 0006 (value, to)
    };

    cpu c = {{0}};
    c.registers[0] = 1;
    c.registers[1] = 2;

    const short opcode = 1;
    const short args[] = {0, 1, 2};
    opcodes[opcode]->opcode(&c, args);
    printf("%d\n", c.registers[2]);
}

#ifdef __cplusplus
}
#endif

#endif

Modulating your modulators

I don't think this is about 'tricks' really - just one simple concept applied over and over again; put your modulating signal through a VCA and then do something to that VCA.

Let's say, as an example, that you're modulating a filter with an LFO. This will give you a wobble in the timbre of each note - but it will always be the same, so you run the LFO through a VCA in order to 'modulate the modulator'.

So... what can you do with that VCA?

If you apply another LFO to the VCA, the amount of wobble will change periodically. For something less predictable you could use a chaotic signal, like an NLC Sloth, instead.

If you apply an envelope to the VCA you could set it up so that the wobble increases as each note plays or you could invert the signal and decrease the wobble as the note plays. With a longer envelope, especially an ADSR, you could control the amount of wobble over a longer period of time, as part of the composition.

If you hit the VCA with a triggered random generator or sample and hold you'll get a different amount of wobble for each note.

For more precise control you could use a sequencer on the VCA instead. Then you can dial in exactly how much wobble you want for each note.

If you use the oscillator's pitch cv to control the VCA you'll get more wobble on higher notes (or invert to get more wobble on lower notes).

Use an envelope follower on the VCA to have another process, internal or external, control the amount of wobble.

Now you can try modulating the modulation modulators. In the random generator case above you could use another modulated VCA on that random signal to change the range of the random signal over time.

Rinse, repeat. - windchill
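The basic patch described above can be sketched numerically. This is a made-up software rendition of the idea, not code for any real module:

```rust
// "Modulating the modulator": a slow LFO drives the VCA that scales a
// faster LFO, so the depth of the filter wobble itself rises and falls
// over time.
fn lfo(freq_hz: f32, t: f32) -> f32 {
    (2.0 * std::f32::consts::PI * freq_hz * t).sin()
}

// A VCA is just multiplication of a signal by a control voltage.
fn vca(signal: f32, cv: f32) -> f32 {
    signal * cv
}

fn filter_cutoff(t: f32) -> f32 {
    let wobble = lfo(5.0, t); // the main wobble
    let depth = 0.5 * (lfo(0.1, t) + 1.0); // slow LFO, rescaled to 0..1
    let modulated = vca(wobble, depth); // modulate the modulator
    1000.0 + 400.0 * modulated // base cutoff in Hz plus the wobble
}
```

Swapping the slow `lfo` for a random source, an envelope, or a pitch CV gives the other variations from the list, one substitution at a time.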

Source page

Modular Synths and tech

23.07.14

Tech

I've been running an experiment where I input the same phrase into a blank ChatGPT instance and into my browser's search bar. The differences are quite stark.

'What is the best way to convey a response is not needed today?' results in the following:

It's quite brutal. I'm glad that OpenAI provides the current iPhone app, though it is a less than ideal replacement for a terminal. Most interfaces with technology are worse without a keyboard. An idea for an app would be to create a keyboard-driven interface for iOS. Perhaps offer this as a consultancy.

Modular Synthesis

I'll give this ChatGPT explanation for what modular synthesis is:

Modular synthesis is a method of sound synthesis that involves using individual modules to create and manipulate audio signals. These modules are typically connected together using patch cables to establish the signal flow.

One popular format for modular synthesis is Eurorack. Eurorack is a standardized format for modular synthesizers, developed by Dieter Doepfer in the 1990s. The format consists of modules that conform to a specified size, power supply voltage, and signal standards, allowing them to be easily integrated into a modular system.

In a Eurorack system, each module performs a specific function. For example, there are modules responsible for generating sound signals (oscillators), processing signals (filters, envelopes, and effects), and shaping the overall sound (mixers, sequencers, and controllers). The modules are physically connected using patch cables, which determine the routing of audio and control signals between them. This flexibility allows for a vast array of possibilities, as you can experiment with different connections and combinations of modules to create unique sounds and effects.

Modular synthesis with Eurorack provides a highly customizable and hands-on approach to sound design. Users can build their own systems by selecting modules from a wide range of manufacturers, tailoring their setup to their specific needs and creative preferences. It also offers the ability to expand and modify the system over time, allowing for continuous exploration and evolution of the sound palette.

Overall, modular synthesis with Eurorack offers a powerful and versatile platform for sound creation, where users have full control over every aspect of the signal path, enabling limitless sonic possibilities and endless experimentation.

Is it a good explanation? Let me know!

What I've been doing is building a modular Eurorack synthesizer to perform the role of a lead- or bass-type instrument. It will have generative aspects as well as capabilities for a full techno(ish) setup. The full setup is less of a priority than the generative aspects.

My buying strategy is to pick modules that expand the potential of my setup: a filter/oscillator/mangler module, then a bunch of modulation or utility modules. Then I sit on that for a lot of sessions to understand what is currently lacking and what I feel limits my creations.

The current setup is

Ideas

TODO: idea about software that you can use as a social media manager. Has rules and the like for automating things. The goal is to build a framework and sell that.

TODO: Talk about creating an LLM using Dolly, taking books you own, training it, then searching on it. Provide the model for others to use. Maybe build an app to allow people to run self hosted versions?

GameDev 003

23.05.24

I've been using a combination of MonoGame + ChatGPT to quickly iterate on an engine. Right now I've implemented rendering, sound, asset loading and a localization module.

One thing I've been trying to figure out is collision detection and constraint solving. Doing those two will enable me to procedurally animate characters and reduce the burden of custom artwork.
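For the collision side, a common starting point is axis-aligned bounding box overlap. This is a generic sketch of mine, not code from the engine:

```rust
// Axis-aligned bounding box (AABB) overlap test: two boxes collide only
// when they overlap on both axes; separation on either axis means no
// collision.
#[derive(Clone, Copy)]
struct Aabb {
    min_x: f32,
    min_y: f32,
    max_x: f32,
    max_y: f32,
}

fn intersects(a: Aabb, b: Aabb) -> bool {
    a.min_x <= b.max_x
        && a.max_x >= b.min_x
        && a.min_y <= b.max_y
        && a.max_y >= b.min_y
}
```

Constraint solving then typically runs after this broad test, pushing overlapping bodies apart along the axis of least penetration.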

A recent post on Hackernews has this gem of advice for optimizing square roots that may be useful for my fixed point engine.

Sometimes you can even get away with dropping the square root completely: if (x2-x1)^2 + (y2-y1)^2 < (x4-x3)^2 + (y4-y3)^2 then ... I once used this in a path tracer to speed things up a little. The results were less accurate, but sometimes this can be used as a trade-off.
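The trick, as I read it, is that square root is monotonic, so comparing squared distances gives the same ordering without ever calling sqrt. A small sketch:

```rust
// Squared distance between two points: since sqrt is monotonic for
// non-negative values, d1 < d2 exactly when d1^2 < d2^2, so the sqrt
// can be skipped when only comparing.
fn dist_sq(x1: f32, y1: f32, x2: f32, y2: f32) -> f32 {
    let dx = x2 - x1;
    let dy = y2 - y1;
    dx * dx + dy * dy
}

// Compare two segments [x1, y1, x2, y2] by length, no sqrt involved.
fn first_segment_shorter(a: [f32; 4], b: [f32; 4]) -> bool {
    dist_sq(a[0], a[1], a[2], a[3]) < dist_sq(b[0], b[1], b[2], b[3])
}
```

This also maps cleanly onto a fixed-point engine, since it's only multiplies, adds, and a compare.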

Another fascinating article is Modding Age of Empires II with a Sprite-Diffuser. It seems like a fantastic way to quickly iterate on content. I'll have to work that into some form of procedural generation or animation.

Sts 02: Analysis

23.05.10

I've continued further into my analysis of Slay the Spire's source code. It's easy to critique things and say how I'd do it differently, but overall it's pretty coherent. I think there's some optimization potential where it could be made data driven rather than all done as classes, but I can understand why it was done the way it was.

The biggest thing to note is that there are a ton of strings used for ids. In my mind, it would make a bit more sense to make a struct and wrap some integers so that you effectively newtype each different id type. However, MegaCrit released a game, I did not. Props to them.
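The newtype idea might look like this in Rust (the id values and names here are my own illustration, not MegaCrit's code):

```rust
// Newtyped ids: each id is its own struct wrapping an integer, so the
// compiler rejects passing a RelicId where a CardId is expected --
// something bare string or integer ids can't guarantee.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct CardId(u32);

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
struct RelicId(u32);

fn card_name(id: CardId) -> &'static str {
    // Illustrative lookup; a real game would read from a data table.
    match id {
        CardId(0) => "Strike",
        CardId(1) => "Defend",
        _ => "Unknown",
    }
}
```

Calling `card_name(RelicId(0))` simply fails to compile, which turns a whole class of mixed-up-id bugs into build errors.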

StS 01: Decompilation

23.05.08

Disclaimer: I will not post specifics when decompiling. I won't post original source code from Slay the Spire nor will I post assets. Steps will be generic and code will be obfuscated.

Reverse engineering has many forms, but for this project I'll be using a Java decompiler to get access to source code. Slay the Spire is a Java-based game, and there is a plethora of tools available for decompilation. I won't go into specifics, but the first step was to install Slay the Spire and then decompile the .JAR files. This resulted in a lot of weird artifacts, but also some original Java code. I'll be using that going forward.

When doing a new project, I like to start by separating my core code into a library that gets pulled into an application. The library only provides structs and functions, while the application provides the shell for input and output. For development I'll use a bottom-up approach with heavy unit testing. My ultimate goal is to then compile the Nim files into C++ code and embed it in Unreal Engine.

This will get me a simple project built:

mkdir shaman_core && cd shaman_core && nimble init shaman_core

From there I will create a makefile with the contents:

CORE=shaman_core/

test:
	cd $(CORE) && nimble test

Going through the decompiled source code, there's a shit ton of Java craziness. Tons of folders and subfolders fill the project, many serving important purposes, but not for gameplay. The core folders I think I've found are _app_source/SlayTheSpire, which contains all save data and preferences, and _app_source/com/megacrit, which contains all the logic. Don't get me wrong, there's a ton of beneficial stuff to be found, such as card atlases and spritesheets, but it's not relevant here.

After you get the source files, I strongly recommend installing some IDE plugins to help with navigation. Go-to-source and go-to-definition are going to be very useful in determining the top level of things.

This is an example bit of code that was run through ChatGPT to convert it from Java to C#:

if (target == null)
{
    isDone = true;
    return;
}
powerToApply = (AbstractPower)new PoisonPower(target, source, amount);
if (source != null)
{
    foreach (AbstractPower pow in source.powers)
    {
        pow.OnApplyPower(powerToApply, target, source);
    }
}
if (target.HasPower("Artifact"))
{
    addToTop((AbstractGameAction)new TextAboveCreatureAction(target, TEXT[0]));
    duration -= Time.deltaTime;
    CardCrawlGame.sound.Play("NULLIFY_SFX");
    target.GetPower("Artifact").flashWithoutSound();
    target.GetPower("Artifact").onSpecificTrigger();
    return;
}
}

I'll be honest: this is not the way I'd write the code. I'm not sure if this is a result of the decompiler or if it's my lack of Java experience, but object oriented programming is not a paradigm I like.

Nim

23.05.07

Lately I've wanted to learn a new statically typed language, but I ran into some roadblocks setting up Haskell and OCaml. While these were all surmountable, they made me not want to pursue those languages further. In my spare time, I want to iterate on things, not worry about setting them up. Rust, Elixir, and C# have all been great in that respect. Nim was pretty easy to set up as well, so I decided to do some learning with it. It can compile to C, C++, or JavaScript, which is very appealing.

The best way to learn is by doing, so that's what I'll do. This project will be about reverse engineering one of my favorite games of all time and coming up with my own inspired version. While I've never released a game, I've had some fun working with friends to build out prototypes.

Music

23.05.06

Developing my own language seems like a large undertaking. I love the idea, but I'm not sure I love the concept of building my own parser, lexer, analyzer, optimizer, etc. It's a long list.

I do want to scratch my learning itch, so I'll likely pick a language other than Rust for high-level application development. I want something that is very type driven and has good developer ergonomics. In my mind, there are a few options:

All are interesting, but not all will make the cut. I want to essentially build DSLs that generate code in other languages. Haskell and OCaml are at the top of the list, but Lisp may make an appearance.

My primary use cases will be music driven, but not necessarily programming at the DSP level. Bitwig, for example, has Java and JavaScript integration for their controllers. I can already envision how I'd build a program using Haskell, so perhaps I'll go that route?

Linux

23.04.30

Recently I wiped my home PC and installed both Windows and Ubuntu for dual booting. It always highlights differences between platforms and forces one to pick and choose.

Another thing I've been working on is setting up a Hetzner server running a flavor of Linux. This will then host some sort of software, likely developed in Rust + Elixir. Ideally enabling real time collaboration.

A passion of mine has been game development of fighting games. I've read just about everything I can, and the best ones use a technique called 'rollback netcode'. This requires deterministic behavior on clients and the ability to execute purely from inputs. When a remote input arrives for a previous frame, the simulation rolls back and replays all inputs from that point. Imagine a collaboration framework that utilizes this. Minimal compute, some bandwidth, all state resides on clients.
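The rollback idea, reduced to its core, looks something like this. A toy sketch of mine, not real netcode:

```rust
// Toy rollback: the simulation is a pure, deterministic function of the
// input history. When a late remote input arrives for a past frame, we
// correct the history and re-simulate from frame zero. All state lives
// on the client; only inputs cross the wire.
#[derive(Clone, Copy, Debug, PartialEq)]
struct State {
    position: i32,
}

// Deterministic step: same state + same input => same result, always.
fn step(state: State, input: i32) -> State {
    State { position: state.position + input }
}

fn simulate(inputs: &[i32]) -> State {
    inputs.iter().fold(State { position: 0 }, |s, &i| step(s, i))
}

struct Session {
    // Per-frame inputs, locally predicted until confirmed by the remote.
    inputs: Vec<i32>,
}

impl Session {
    // A remote input for an old frame: rewrite history and replay.
    fn apply_remote(&mut self, frame: usize, input: i32) -> State {
        self.inputs[frame] = input;
        simulate(&self.inputs)
    }
}
```

Real implementations snapshot state periodically instead of replaying from frame zero, but the principle is the same: determinism makes inputs the only thing worth sending.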

Compilers

23.04.29

Rust is much easier to set up than Haskell. Every time I want to start a new Haskell project, I'm immediately reminded that there's no easy test suite and that the ecosystem is academia-focused. When you want to hack on something, that makes it very difficult.

While one could argue that's a benefit, it makes it harder to do for my workflow. Perhaps I'm too familiar with Rust or Elixir to want to dig into something where I have to roll my own solutions. At that point I'd rather reach for C.

C has its own problems though, and I'd prefer to skip those.

CyboNTG is the new repo for this project. There are a lot of directions I can take this, but ultimately I want a language similar to Haxe but tailored to my preferences. Do I go functional? Do I go imperative? I'm not sure yet, but we'll see.

GameDev

23.03.28

I have a coffee, some metal and a laptop.

Well, it's been two hours and I have nothing to show. I played a bit with C++ but didn't get anywhere. The build system is not my favorite.

How to deal with this? Simple. I am going to sign up for a game jam and see how far I can go. There are several options: GameMaker, Godot, MonoGame, Unity, Rust, C, or Unreal. I'm not particularly interested in anything crazy, so I'll stick with what I know.

Ultimately I want cross-platform abilities with ease of porting. I'm currently leaning towards a MonoGame setup (as it's cross platform) with a C library that has a scripting language. I'll do a bit of digging for which scripting language I want, but a single-file one would be ideal.

In summary:

We'll see what happens. I want a finished project I can release.

Nutritional Goals

23.03.27

One thing I've not done in a long time is track what I eat using an app. After hiking and climbing this last weekend, I realized that I want my fitness level to be far higher. I want to be able to climb a ton, not be winded climbing up tall hills and increase my confidence in my body.

Confidence? This includes aesthetics, but also knowledge of my limits and how to do things safely. Earlier this year I did something to my knee that prevented me from going at the rate I was before. It meant I cut back running, didn't climb, and skipped legs. I signed up for physical therapy, which I think has helped. How much? I can't say. I hiked around Smith Rock, did a small route, and then went to the gym and climbed more this week. Seems effective enough.

Ultimately I want to set myself up for being able to do whatever I want. I'm not sure what that will look like, but cutting back on booze and watching what I eat seems like a good start.

LangChain Ideas and Smith Rock

2023.03.26

I've been using ChatGPT a bit to learn about machine learning. I'm trying to follow the advice of 'be product focused'.

Here's a list of tasks I'd like to do:

Riffusion

2023.03.01

I had an excellent jam session with my friend Matt. He showed me this site: Riffusion, which allows one to generate music using Stable Diffusion.

What I want to do is create a web UI, then use a local machine to power it. Maybe use Elixir as a remote webserver, then run jobs to process things? I'll think on it.