A collection of notes, half-baked ideas, and other things I wanted to write down, either to clarify them in my own mind or to explain them to somebody else.

I’m also quite bad at making a coherent point, so my hope is that by writing things down I’ll be able to iterate on them until they begin to make sense. One day. Maybe.

Posts

  • Trying to improve mipmap blending

    LINEAR_MIPMAP_LINEAR, blending from one mipmap level to the next, is an interesting bodge. It approximates a dynamic cut-off frequency with linear interpolation between adjacent log2 cut-off frequencies.
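
    A rough model of that blend, assuming a hypothetical mips pyramid (a list of 2D arrays) and a precomputed footprint (the screen-space texel step at level 0); real hardware also filters within each level and clamps the LOD, which is ignored here:

    ```python
    import math

    def sample(mips, level, u, v):
        # Nearest-texel lookup, standing in for the in-level LINEAR filtering
        # a real GPU would also perform.
        img = mips[level]
        h, w = len(img), len(img[0])
        return img[min(int(v * h), h - 1)][min(int(u * w), w - 1)]

    def linear_mipmap_linear(mips, u, v, footprint):
        # log2 of the sampling footprint gives the fractional level of detail;
        # the fractional part becomes the blend weight between adjacent levels.
        lod = max(0.0, math.log2(max(footprint, 1e-9)))
        lo = min(int(lod), len(mips) - 1)
        hi = min(lo + 1, len(mips) - 1)
        t = min(lod - lo, 1.0)
        a = sample(mips, lo, u, v)
        b = sample(mips, hi, u, v)
        return a + (b - a) * t   # linear interpolation between the two levels
    ```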

  • Light field aperture synthesis from drone flight path

    For a low-rate-of-change scene like a construction site, I think it should be possible to use a drone with a camera on it to repeatedly fly through the same grid pattern in the sky, where that grid represents a large aperture, collecting compound images as if it were a light-field camera.

  • Hybridising zip files with an alternate compression scheme

    An interesting feature of zip files (and consequently many formats which are just zip files renamed) is that all the identifying data appears at the end of the file, with links back to content earlier in the file. This makes it possible to leave gaps in the middle or even at the beginning of the archive, which is why a self-extracting zip file can be decompressed on any architecture by a native zip extractor: the executable stub at the front is just a gap the extractor never looks at.
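
    A minimal sketch of that back-to-front layout (assuming a well-formed archive and ignoring Zip64): the end-of-central-directory record is found by scanning backwards from the end of the file, and it in turn gives the offset of the central directory further back:

    ```python
    import struct

    EOCD_SIG = b"PK\x05\x06"  # end-of-central-directory record signature

    def central_directory_location(path):
        with open(path, "rb") as f:
            data = f.read()
        # The EOCD record is 22 bytes plus an optional comment of up to
        # 65535 bytes, so it must start within that distance of the end.
        pos = data.rfind(EOCD_SIG, max(0, len(data) - 22 - 65535))
        if pos < 0:
            raise ValueError("no end-of-central-directory record found")
        # Fields at +12: central directory size, then its offset in the file.
        cd_size, cd_offset = struct.unpack_from("<II", data, pos + 12)
        return cd_offset, cd_size

    # Anything before cd_offset that the directory entries don't reference
    # (a self-extractor stub at offset 0, say) is simply never read.
    ```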

  • Heatmap-based triangulation

    Many years ago these little Bluetooth object tracker gadgets hit the market (e.g., Tile, TrackR, Chipolo, and eventually AirTag), and I began to wonder if it could be made easier to find them in a messy room, or long grass, or whatever, doing better than the RSSI-based numerical distance estimate that you could get on your phone.

  • Stochastically ordering items by frequency

    If you believe your data will fit a curve then, much like stochastic counters which only increment with a probability suited to the conditions, you could perform stochastic swaps which only promote an item up the list with some probability. I guess the theory includes a flat probability, but I don’t see the point in that.
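
    A minimal sketch of the idea; the 1/(position + 1) swap probability here is only a stand-in for whatever curve the data is believed to fit:

    ```python
    import random

    def touch(items, i, rng=random.random):
        # Record an access to items[i]; promote it one place with a probability
        # that depends on its current position rather than keeping counters.
        # The further from the front, the harder it is to move up, mirroring a
        # frequency curve that falls away down the list.
        if i > 0 and rng() < 1.0 / (i + 1):
            items[i - 1], items[i] = items[i], items[i - 1]
            return i - 1
        return i
    ```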

  • Address-coded endian

    Many years ago I found in an old Arm manual a description of the permutation that would happen to a 32-bit word loaded from an un-aligned offset. I don’t recall what the pattern was, but it looked like the barrel shifter had been repurposed to handle sub-word addressing of bytes and then masking deferred because it was a full-width read, even though the address was rounded down.

  • Naturally-aligned compressed opcode encoding

    In the world of fixed-length instruction sets, the addition of compressed instructions (no-longer fixed-length instructions) is kind of a nuisance. Arm dropped them when they went to AArch64, and there was a proposal to make them optional in RISC-V but it didn’t fly.

  • Hinting train of branch directions in advance

    Long ago I had to do a scaling operation which would yield a stream of branch decisions, based on whether or not a row pointer would advance and the subsequent row would need to be recalculated. I decided to try calculating the branch decisions well in advance, shifting them into a shift register, branching on the least significant bit of that register, and then doing a right-shift to schedule the next branch decision. I think the code is in AOSP somewhere.
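
    A sketch of the shape described above (in Python rather than the original assembly, assuming an upscale so the source row advances by at most one per output row; the helper name and structure are illustrative, not the AOSP code):

    ```python
    def scale_rows(src_rows, dst_height):
        src_height = len(src_rows)
        # Precompute every advance/don't-advance decision as one bit each.
        decisions, prev = 0, 0
        for k in range(dst_height):
            src_index = k * src_height // dst_height
            if src_index != prev:
                decisions |= 1 << k
            prev = src_index
        # The inner loop only ever tests bit 0 and shifts right, which is the
        # part that mapped onto a shift register in the original.
        out, row = [], 0
        for _ in range(dst_height):
            if decisions & 1:     # branch on the least significant bit...
                row += 1          # ...advance (and recalculate, in the real code)
            decisions >>= 1       # ...then shift to queue up the next decision
            out.append(src_rows[row])
        return out
    ```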

  • Digging in to Pingback and Webmention

    The other day I found out what a “Pingback” was, after I saw a bunch of them at the bottom of a widely-cited blog post. So I set about digging in to all that.

  • Font-based digit grouping

    Struggling to read absurdly large numbers in terminal windows many years ago, I wondered how I might apply some logic in the terminal to subtly bunch together groups of three digits as a form of thousand separator. Eventually it occurred to me to try doing it in the font with ligature rules (initially as a joke, but then I looked into it) and it turned out Numderline was a thing which already did that.

  • Neighbourhood sampling order during texture filtering

    During my tinkering with pixel-art scaling I found a source of noise which comes up when trying to use the derivative operations in shaders. It’s somewhat my own fault for doing thresholding excessively, but still…

  • Scaling pixel art with SDF

    For a while I’ve been wondering if giving a fantasy console like PICO-8 a graphics system defined in terms of more scalable low-memory primitives would lead to a more interesting retro style than the conventional sharp-edged squares. Not concepts like low-poly 3D or minified SVG (e.g., RIPscrip), but maybe using pixel art scaling algorithms natively, or with additional markup in the data to clarify intent (sharpening corners, for example). Simply bake them into your pixel-art editor so you see what you’re going to get once it’s upscaled, with the option to correct it where it goes awry.

  • Embedded web content without all that JavaScript cruft

    In an earlier blog post I wanted to embed some diagrams. Really I wanted SVG for clean scaling between devices, but I don’t have that format as an output option.

  • A pick-resistant lock design

    There are a fair few attempts at “unpickable” locks out there, and overall I don’t think I have a great deal to add to the range of pre-existing methods. But a couple of years ago I thought I might have a go at my own design anyway.

  • Error resilience in rolling-intra

    Rolling intra is where a portion of each coded video frame is encoded without reference to any previous frame, guaranteeing that that part of the scene can be reconstructed when joining or re-joining a stream without previous context.

  • Digits for a base-210 system

    My own contribution to the field of making up improbable alien number systems involves counting in base 210. Rather than try to define 210 distinct symbols, or to make a 14-upon-15 or 15-upon-14 digit pair system, I chose to break each digit down into three components mod 5, mod 6, and mod 7.
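
    Reading “mod 5, mod 6, and mod 7” as residues, the three components identify the digit uniquely because 5, 6, and 7 are pairwise coprime and 5 × 6 × 7 = 210. A small sketch:

    ```python
    def encode_digit(d):
        # Split a base-210 digit into its three residue components.
        assert 0 <= d < 210
        return d % 5, d % 6, d % 7

    def decode_digit(r5, r6, r7):
        # Brute-force Chinese Remainder Theorem: exactly one value in 0..209
        # matches all three residues.
        for d in range(210):
            if (d % 5, d % 6, d % 7) == (r5, r6, r7):
                return d
        raise ValueError("inconsistent residues")

    assert encode_digit(187) == (2, 1, 5)
    assert decode_digit(2, 1, 5) == 187
    ```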

  • Optimising Adler-32 checksum

    The Adler-32 checksum consists of two 16-bit sums. One is the sum of all the bytes in the data, plus one, and the other is the sum of all those intermediate sums, which works out to be the same as the sum of all the bytes multiplied by their distance from the end of the buffer plus the length of the buffer. Both modulo 65521.
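
    Both forms side by side, checked against zlib (counting the final byte as distance one from the end):

    ```python
    import zlib

    MOD = 65521  # largest prime below 2**16

    def adler32_running(data):
        # The definition: two running sums, both reduced modulo 65521.
        a, b = 1, 0
        for byte in data:
            a = (a + byte) % MOD   # one plus the sum of all the bytes so far
            b = (b + a) % MOD      # the sum of all those intermediate sums
        return (b << 16) | a

    def adler32_weighted(data):
        # The equivalent closed form: each byte weighted by its distance from
        # the end of the buffer, plus the length of the buffer.
        n = len(data)
        a = (1 + sum(data)) % MOD
        b = (n + sum(byte * (n - i) for i, byte in enumerate(data))) % MOD
        return (b << 16) | a

    data = b"Wikipedia"
    assert adler32_running(data) == adler32_weighted(data) == zlib.adler32(data)
    ```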

  • On the endianness of bitstreams

    Where I’m from conventional wisdom has always been that while it’s an objective fact that little endian is the one-true-and-correct endian, bit-streams should always be packed as big-endian data.

  • Curl is not secure by default

    I often find that when there’s been some debate about a topic on the internet the conclusion might be a thorough debunking of a bad idea but also a failure to address a more nuanced idea – or sometimes even a more fundamental idea.

  • What I've learned so far about CSS, SVG, Liquid, and Jekyll

    From the point of view of knowing almost nothing about how any web technology works, here’s a bunch of stuff I had to pick up to draw a picture in a GitHub-powered blog.

  • A Latin-Square-based L1 cache layout

    If I were going to build some kind of fantasy machine I would not fuss about making it high-performance. Mine would just be a mash-up of random things I found interesting at some point.

  • Bit-sliced matrix multiply

    There’s a matrix multiplication in the middle of a lot of popular things, right now, and it’s an operation which frequently tolerates remarkably low precision.

  • Pivot-sorting quick sort

    Regarding those quick-sort variants which do things like median-of-k pivot selection before subdividing…

  • Hash map reduction operations

    I notice that some defaults for things like STL unordered_map are a bit scary. The function to reduce the range of the hash down to the size of the table is remainder, and the default hash for integers is the identity function.
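
    A small illustration of why that combination bites. CPython also hashes small integers to themselves, which makes it convenient for the demonstration (though its own dict perturbs the probe sequence to compensate), and the table size here is an arbitrary stand-in:

    ```python
    TABLE_SIZE = 1024  # an assumed bucket count, purely for illustration

    def bucket(key):
        # Identity hash reduced by remainder, as described above.
        return hash(key) % TABLE_SIZE

    # Keys laid out with a stride equal to the table size all land in the
    # same bucket, however many of them there are.
    keys = [i * TABLE_SIZE for i in range(1000)]
    assert all(bucket(k) == 0 for k in keys)
    ```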

  • A reasonably effective hash instruction

    As per part one of this post, I have reasoned through and tested a proof of concept for a generic CPU instruction which could achieve most of the required mixing of a good hash function in a single cycle (working with a combinational depth of 20 to 24 gates as a guide).

  • Why I want a dedicated hash instruction

    A good hash function is expected to remove correlations of various sorts between subsets of inputs and their hashed outputs in order to mitigate the risk of those correlations producing performance aberrations (cache collisions, unbalanced structures, etc.) or statistical biases.

  • Idly musing over RowHammer mitigation strategies

    Watching a RowHammer talk (slides) a while back (not actually the linked one, but I couldn’t find the one I attended) left me with a couple of thoughts about possible mitigations which I didn’t see discussed.

  • Machine innervation

    Looking at the faults my espresso machine develops over time, I’ve often wondered if fault detection and error messages could be better. It’s probably safe to assume that including sensors for every possible fault, and developing code to check and understand all those sensors, would be prohibitively expensive and unreliable, so I’ve wondered about a different approach.

  • N-bit Mixer Functions

    For reasons I intend (hah!) to go into in a longer post I was looking for operations with which to construct and whiten (or “finalise” or “mix” or “avalanche” or whatever) RNGs with 2^n periods. I was looking at splitmix64, which led me to Fast splittable pseudorandom number generators, which cited Improving on MurmurHash3’s 64-bit Finalizer (although I found that page by other means), and finally MurmurHash3, which tells me where those constants and that construction both come from.
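
    For reference, the construction in question is MurmurHash3’s 64-bit finaliser: alternating xor-shifts and multiplies by two constants (splitmix64’s finaliser has the same shape, with different shifts and multipliers from Stafford’s search). Masked to 64 bits here because Python integers don’t wrap:

    ```python
    MASK64 = (1 << 64) - 1

    def fmix64(k):
        # MurmurHash3's fmix64: xor-shift, multiply, xor-shift, multiply,
        # xor-shift.  These shifts and constants are what the cited posts
        # set out to improve upon.
        k ^= k >> 33
        k = (k * 0xFF51AFD7ED558CCD) & MASK64
        k ^= k >> 33
        k = (k * 0xC4CEB9FE1A85EC53) & MASK64
        k ^= k >> 33
        return k
    ```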

  • Minimally biased NPOT-ranged random numbers

    In my previous post I described a technique for scaling random numbers to new intervals producing extremely small bias but with a constant execution time.

  • Random number scaling

    I’ve been doing some messing around with random number generation at work, and this inspired me to revisit something which has troubled me since my teenage years: Perfectly unbiased random numbers over a range which is not a power of two.
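
    For context, the textbook baseline is rejection sampling: perfectly unbiased, but with only an expected-constant running time. A sketch, not necessarily where the post ends up:

    ```python
    import secrets

    def unbiased_below(n):
        # Draw from the smallest power-of-two range covering [0, n) and reject
        # anything that falls outside it.  Every surviving value is equally
        # likely, but the loop occasionally has to go around again.
        bits = max(1, (n - 1).bit_length())
        while True:
            r = secrets.randbits(bits)   # uniform over [0, 2**bits)
            if r < n:
                return r
    ```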

subscribe via RSS