spiffytech

I'm available for part-time work!

This morning I went down to the Apple store at opening so I could do a demo. I was 50/50 on whether it'd be just me, or a line around the block. An hour ahead of opening there was one other guy there. At quarter-til, when they started taking names, 11 people.

The 20 Apple employees applauded and cheered as we entered the store. It was very surreal.

TL;DR lots of good, some underwhelming. More compelling than I expected. Some hurdles might go away after an adjustment period. Not sold at $3,500, but if they release a non-Pro for say $1,500 that'll be pretty tempting.

Display unit

My demo unit, served up like sushi

Physically, wearing it feels fine. I stopped even noticing I had it on after a minute or two. I tried both headbands, strongly preferred the single-loop one. Two-loop felt harder to adjust correctly, less secure, and less comfortable. Weight didn't bother me for the ~15 minute demo I got, hardly noticeable.

They stuck my glasses into a machine to measure the prescription, then selected from the ~100 sets of lenses they had on-hand. My prescription is mild, and I felt I could see clearly while using the device. A guy I met in line has a -8 prescription. Apple had lenses on hand that were either a match, or close enough. He had a good demo experience, but after the demo he found the transition back to his polycarbonate glasses jarring, since they have strong lensing effects.

After calibration (look at dots and click them) it dropped me straight into AR. It feels good, though imperfect. Significant motion blur if I turn my head while looking across the room. But holding still or moving slowly, I have no problem seeing everything around me. I don't feel isolated from my environment at all. Pass-through is noticeably dimmer than IRL. And foveated rendering makes the scene seem less than completely clear in my peripheral vision. But I don't think I'd need to remove my device for anything unless I was simply done using it.

There was an Apple employee seated beside me running my demo. I found it easy to chat back and forth with him, and basically forgot we were separated by a screen and cameras (except when immersive mode was on, more on that below). It just felt like a normal conversation.

Graphics overlaid on top of pass-through are rock-steady. They stayed exactly in place, no matter how much I rocked or shook my head. Aesthetically, overlaid graphics integrated naturally into the scene. Felt like they were supposed to be there, like a natural part of my vision (as much as anything artificial can).

Positioning windows around me worked well, felt fine. I could place them in X/Y/Z space, and e.g. overlap behavior felt intuitive, even though they intersected at angles (in an arc centered on me). The device is smart about mixing windows into the environment. Initially a window was blocking my view of the guy running my demo. I moved it to the background and the device started drawing the window behind the guy. I could also move windows forward to do the equivalent of sitting closer to the screen.

Pinch controls worked alright, a little finicky. For the first minute or so, I was naturally resting my off-hand in my lap in a pose that the Vision Pro mistook for pinching, but I quickly adjusted to stop accidentally clicking everything I saw. Pinch to click, pinch+drag to scroll, pinch both hands and pull/push apart to zoom. A few times the pinch didn't catch, unclear if it's because I need practice, because I pinched too soon, or if it's a technology thing.

The device has two buttons: one for photo/video (did not use), and a crown that you spin to control immersiveness, press as the back button, or press to pull up the app list if everything is closed.

The whole focus-follows-eyes thing felt natural to me. I went in prepared, having seen reviews of people accidentally looking away from whatever they wanted to click. Still, it was awkward: there's a noticeable pause between when I saw something and when it became clickable. Unsure if this is a design choice or not (i.e., to keep the display from going nuts with focus events as you look around). It made interacting with things feel slow, and I kept trying to click before it was ready. I'd probably get used to the timing in a day or so of use. I did learn that my eyes flit around more than I realized.

The device tries to be smart about auto showing+hiding buttons and window controls, but I found it confusing. The handle to reposition windows, close things, change volume, whatever, was often not present when I wanted it and I was never sure what I was supposed to do to see it. Felt like sometimes it would appear if I stared where it was supposed to be, sometimes required a click, etc. But if I clicked, sometimes that did something else. Could just be a learning curve. Not enough time with the device to tell if its behavior was consistent in all situations, or if it was truly doing different things at different times.

They walked me through reading a web page, black text on white background with some images. It was rendered crisply, no pixelation. For (outdated) perspective, on my Oculus Rift I can barely read system menus. On the Vision Pro, I'm not sure I could see pixels at all. Foveation aside, I had no trouble reading anything, and I'll bet if I spent much time in an immersive scene I could nearly forget I wasn't physically there.

Yet it felt challenging to read the web page. The foveated rendering left me unable to pull information from my periphery, and I didn't realize how much I relied on that. I could only “see” things my eyes were pointed directly at. That made it hard to skim, or keep track of where I was on the page. I kinda felt like I had to read slowly and deliberately. As someone becomes proficient at reading, they stop reading letters and start skimming word shapes. The experience made me feel like I rely on skimming sentence shapes, too, and I couldn't do that – I could only really see the word I was looking at. Maybe I'd get used to this, I don't know. The actual field of vision was fine (how far around me the scene wraps); no worse than a motorcycle helmet, maybe better. First impression is I'd clearly rather do a workday with a laptop than with a Vision Pro, mostly because of foveation while reading text.

And again, the foveation was noticeable when looking around the room I was in. Everything else felt effortless to look at, and looked great. Apps, photos, videos, system UI.

Foveated rendering
This is what things looked like. From Tobii.

They had some 3D recordings, one from an iPhone and I think another from a Vision Pro, and some stuff that resembled National Geographic footage. The NatGeo-style stuff was 180° wrap-around, the rest was a flat plane like a normal phone video, only... 3D-ified. I felt I was present in the scenes in a way that's truly hard to describe. Closest thing is when I played the Oculus game Lone Echo, free-floating around a space station. Like I wasn't watching a video, I was filming the video, in-person. I can see this being very compelling for personal moments, and if the tech ever becomes widespread, I'd easily see it replacing 2D feature films (even without wrap-around view).

One scene, sitting on a lake shore during the rain, I cranked up to full-immersive and it felt incredibly peaceful. I would have just sat there in that scene relaxing for as long as I could, if they'd let me. Very therapeutic. Reminds me of when I had solitude at sunset at White Sands.

Immersive mode works great. Shut out everything around me, made the display my whole world. Passing through people's faces did not work well. I could barely see the guy running the demo, like some ghostly phantasm in the shadows. I tried turning down the immersive knob, but it just started letting in background details without making the guy beside me any clearer, until I was basically back to pass-through.

The knob for pass-through vs immersive has a lot of positions between the two, but I didn't see any point to the in-between. It felt like all knob positions were basically-pass-through or basically-immersive. I didn't feel enough of a gradual change to matter.

They didn't put the dinosaur app in the demo script. The guy I met ignored the script and went and found it, said it totally blew him away.

They did not have me do any photo/video capture.

Sound is very good, to the extent I could evaluate it in that environment. It doesn't do noise-canceling, but it felt that way: when stuff played, it felt like the Apple store's sounds disappeared. Partly that was the volume being set a bit high, but even after turning it down, it still felt like the headset was all I was listening to.

The light visor kept falling off in my hands when I held the device. It's only attached with a weak magnet (magsafe-like), and it disconnected any time I held the headset there. I'd probably still disconnect the visor all the time, even once I got used to holding the device by the bezel.

There were one or two moments when I tugged the battery cable while trying to look around at 360° scenes. The battery pack was on the seat beside me, but the cable wound up running down my back, where it was pressed into the backrest, so it didn't move freely. Probably not a problem most of the time, but I don't think it's practical to deliberately position it somewhere safe.

They didn't set up anyone's Persona for demos, so I couldn't see what the fake-eyeballs thing looked like from the outside.

I didn't try out typing, but I sure wouldn't like to do much of it with the pinch gestures. If I connected a keyboard it'd be fine.

I don't see this replacing a laptop, at least not until the foveating stuff is better. Even then, maybe not. It's supplemental. But it could easily become most of my leisure computing. I'd almost certainly prefer watching movies this way rather than using my laptop, TV, or a theater (if I'm not watching socially). And it would be incredible if scenes were filmed in that immersive mode, where it felt like I was really there. I've watched 3D movies and it's just not the same.

Seeing other people using the Vision did not give me a weird dystopian creepy feeling. Of course, they were all very animated, talking with their demo handlers and being excited to try new things. Might be different if they go all dead-eyed zombie WALL·E passenger. But there's a glimmer of hope that it won't be stigmatized like Google Glass was. Doesn't look dorky enough to be a problem either, but I'm a lot less sensitive to that aesthetic than others, so take that with a grain of salt.

ReadStuffLater uses emojis to tag content. It's simple, it's fun, and it affords basic content organization without encouraging users to spiral into reinvent-Dewey-Decimal territory.

screenshot of emoji tags
Yeah, the aesthetics need work

There's just one problem: data validation. When the client tells my server to tag a record, how can the server confirm the tag is actually an emoji? I mean, I shouldn't accept and store just anything in that field, right?

This is a much gnarlier problem than it has any right to be. If you want the TL;DR, skip down to what I did, what I wish I'd done, and the more technical solution!

Failed idea #1: Use Regex character classes

My first thought was to google around for this, and everyone recommends regex! Everyone! Well that seemed easy.

There is a recent(?) extension to regex that lets you specifically ask, “is this an emoji?”

Except it's wrong. And also not available everywhere.

const regex = /^\p{Emoji}$/gu;
console.log("🙂".match(regex))
console.log("*️⃣".match(regex));
console.log("👨🏾".match(regex));

> Array ["🙂"]
> null
> null

I mean, it produces kinda-okay results if you ask “does this string contain any number of emojis”. But it fails hard when you ask “Is this string made of exactly one emoji, and nothing else?”.

Also, it seems Postgres regex doesn't support these special character classes, so validation would be strictly at the application layer.

EDIT: Someone showed how to patch the holes in this approach and make it work. Check it out below!

Why does the regex give the wrong answer?

I'm glad you asked! It turns out there isn't really such a thing as “an emoji”. You have code points, and code point modifiers, and code point combinations.

A great primer on this is Bear Plus Snowflake Equals Polar Bear.

Here's the dealio: Let's say we want to display the emoji for a brown man, “👨🏾”. There isn't a single code point for that. Instead it's “👨” followed by a skin tone modifier. And plenty of emoji go further, stapling several emoji together with ZWJ.

ZWJ is “zero-width joiner”. It's a Unicode code point that gets used in, I guess, the Indian Devanagari writing system? But it's also a fundamental building block for emojis.

Its job is “when a mommy code point loves a daddy code point very much, they come together and make a whole new glyph”.

Basically any emoji that includes at least 1 person who isn't a boring yellow person doing nothing is several code points stapled together, usually with ZWJ. Some other things work this way too.

Some examples include: 👨‍👩‍👦 (man + woman + boy), 👩‍✈️ (woman + airplane), and ❤️‍🔥 (heart + fire).
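To see this for yourself, you can split a string into its code points. Here's a quick sketch (plain TypeScript, nothing ReadStuffLater-specific):

// Spread a string into code points to see what a "single" emoji is really made of.
const codePoints = (s: string): string[] =>
  Array.from(s).map((c) => "U+" + c.codePointAt(0)!.toString(16).toUpperCase());

console.log(codePoints("🙂"));  // ["U+1F642"], a single code point
console.log(codePoints("❤️‍🔥")); // ["U+2764", "U+FE0F", "U+200D", "U+1F525"], heart + VS16 + ZWJ + fire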

(And flags are multiple code points that aren't connected by ZWJ! ††)

(If your computer doesn't have current or exhaustive emoji fonts (thanks, Linux!), you might see what's supposed to be a single glyph instead displayed as several emojis side by side, like how my computer shows “Women With Bunny Ears Partying, Type-1-2” as “ 👯 🏻 ♀️”.)

So our regex can't just check if a string is an emoji: many things we want to identify are several emojis stapled together.

(The way you want to think about your goal here is “graphemes” and “glyphs”, not “characters”.)

Fortunately, when I experimented, it looked like you have to join characters in a specific order, so when you add both skin tone and hair color (“👱🏿‍♂️”) you can count on it happening in exactly one canonical byte sequence. Otherwise, we'd have to dive into Unicode normalization (a good topic to understand anyway!).

Edit: Someone showed me how to make this work. Check it out below!

Failed idea #2: Use Regex character ranges

Alright, so we can't just use the special regex “match me some emoji” feature. What about a regex full of Unicode character ranges? StackOverflow sure loves those!

Well, they're all either too broad or too narrow.

You get stuff like “just capture anything that's a 'Unicode other symbol'” (/\p{So}+/gu). This fails for the same reasons as approach #1, and also for the bonus reason that this character class includes symbols that aren't emojis ('❤').
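For instance, the plain text-style heart sails right through an "other symbol" check:

// U+2764 HEAVY BLACK HEART is General_Category=Other_Symbol, so it passes,
// even though without VS16 it's just a text-presentation dingbat.
console.log(/^\p{So}+$/u.test("❤")); // true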

Ah, but some other StackOverflow answer says to just use a regex for Unicode code points! That also fails the same way as approach #1, plus, nobody includes exhaustive code point ranges in their SO answers.

Here's a partial list of valid emoji:

Two things to note:

1) There are quite a few code ranges that include emoji! Not just the handful that all the StackOverflow answers include. If you want zero false positives, you need (eyeballing it) a hundred code point ranges.

2) See all those grey empty spaces? Those are non-emoji characters that sit in the same code point ranges. You probably don't want to accept “ª”, “«”, etc. as emoji.

So you're either including a bajillion micro-ranges, or a handful of very wide ranges that will give false positives, or you're rejecting valid emoji.

And once you pick some ranges, I have no idea whether they'll include the new emoji the standard adds each year.

So validating code point ranges is a terrible approach. It's just plain wrong: emojis aren't individual code points in the first place, and you'll get a huge portion of false positives and false negatives.

Oh, don't forget that JavaScript strings are UTF-16, while everything else in the world speaks UTF-8. If you're building a regex out of hardcoded escape sequences, the numbers JavaScript wants (UTF-16 code units, unless you use the u flag with \u{...} code point escapes) won't be the numbers anyone else is using.

Failed idea #3: just stuff all possible emojis into a regex

Alright, so what if I just get a list of EVERY POSSIBLE EMOJI, and build a regex out of them like /🙂|😢|😆/? It's exhaustive, it's accurate, and it'll match individual, whole glyphs.

Except... *️⃣ broke my regex, because it's not its own symbol: it's a regular asterisk followed by other stuff: “* + VS16 + COMBINING ENCLOSING KEYCAP”.

VS16 is the Unicode byte that says “Hey, this character can either look like text or like an emoji, please show it as emoji”.

Regex wasn't happy about that – all it saw was a random asterisk in my pattern and it threw a fit.
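Here's a minimal repro of the fit (the exact error message depends on your JS engine):

// The leading "*" in *️⃣ is a quantifier with nothing to quantify, so the
// pattern won't even compile.
try {
  new RegExp("*️⃣|🙂|😆", "u");
} catch (err) {
  console.log((err as Error).message); // e.g. "Invalid regular expression: ... Nothing to repeat"
}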

I mean, even the markdown engine for this blog post mistook that as “please make the rest of my post italic” until I put the emoji into a code block.

But maybe I was on the right track trying to exhaustively match all emoji?

What worked for me

What I finally came up with was exhaustively validating emoji shortcodes instead of emojis themselves. Shortcodes are those things you type into GitHub or Slack to summon the emoji popup – e.g., “:winking_face:”.

The great part about shortcodes is they're strictly simple characters. Off the cuff, I think it's all a-z and _. Unsure about numbers.

That makes them super convenient to store or pattern match on. Not so convenient for other reasons (see the next section).

So when a user picks out an emoji, I find its shortcode and store that in the database. When I display an emoji, I convert the other way.

To build my allowlist, I found an NPM package that holds the same data as the emoji picker I'm using. I wrote a script to extract all the shortcodes, generate all the appropriate variants, and turned that into a SQL list of values I could copy/paste.

I stuffed that into a database table and foreign key'd my records to it. (I previously used a CHECK constraint using IN, but that made schemas very noisy.)

I wrote the output of that file to disk and checked it into source control. Now every time I build the app I generate the data again and compare against the oracle, so if the package's list of valid emoji gets updated, I'll get a build failure until I update my allowlist.
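The generation script boils down to something like this. The dataset shape and skin tone encoding here are stand-ins, not the real package's fields, so treat it as a sketch:

// Hypothetical dataset shape; the real NPM package's fields are named differently.
interface EmojiEntry {
  shortcode: string;     // e.g. "people_holding_hands"
  skinToneSlots: number; // 0 = yellow-only, 1 or 2 = tintable figures
}

const tones = [1, 2, 3, 4, 5, 6]; // my own encoding of the skin tone variants

const variants = (e: EmojiEntry): string[] => {
  if (e.skinToneSlots === 0) return [e.shortcode];
  if (e.skinToneSlots === 1) return tones.map((t) => `${e.shortcode}:${t}`);
  return tones.flatMap((a) => tones.map((b) => `${e.shortcode}:${a}:${b}`));
};

// Emit a VALUES list to paste into the migration that populates the allowlist table.
export const toSqlValues = (entries: EmojiEntry[]): string =>
  entries.flatMap(variants).map((s) => `('${s}')`).join(",\n");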

Problem: solved ✅

What I wish I'd done instead

I should have done basically the same thing, except with the actual emoji. I used shortcodes because I got caught up in path dependence with the regex stuff. But if I'm already using a data structure of discrete strings, why not just use the emoji themselves?

There's a modest advantage in network / storage efficiency (why store lot byte when few byte do trick?), but the real advantage would be simplicity.

In the emoji dataset I have, an emoji like “:people_holding_hands:” 🧑‍🤝‍🧑 doesn't have different shortcodes for skin tone or hair color. It's just “:people_holding_hands:”. Checking Emojipedia, I get the uncertain impression that shortcodes might not be standardized, and I see some tools have different shortcodes for skin tones, while others don't.

I had to make up my own encoding for that, including noticing the emoji might have zero skin tones (yellow figures), or multiple skin tones (two figures of different races).

I also have to do a lookup every time I display an emoji. In an ideal world, I'd lazy-load the emoji picker JS so it only downloads when the user actually wants to select an emoji.

But because I have to convert shortcodes to emoji, I have to load the picker's database on any page where I want to display an emoji, so I can figure out what glyph matches my stored data people_holding_hands:3:5.

If I were to revisit my implementation, I'd just store and validate straight-up emoji.
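The validation then collapses to a set-membership check. Something like this, assuming the picker dataset exposes the rendered glyph for every variant (the field names are made up):

// Hypothetical: whatever the emoji-picker package exports, with one rendered
// glyph string per entry (including skin tone / hair variants).
declare const emojiDataset: { glyph: string }[];

const validEmoji = new Set(emojiDataset.map((e) => e.glyph));

const isAllowedTag = (tag: string): boolean => validEmoji.has(tag);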

A more technical solution

Over on Lobsters, user singpolyma pointed out how to test a string without needing an oracle.

You use your language's tools to detect if the string is a single grapheme, and then you check if it either passes the Emoji regex character class, or contains the Emoji variant selector code point.

Here's what you do:

const isEmoji = (e: string) => {
  // Intl.Segmenter's default granularity is "grapheme", i.e. user-perceived characters.
  const segmenter = new Intl.Segmenter();
  // Matches code points that render as emoji by default.
  const regex = /\p{Emoji_Presentation}/u;
  // VS16, the "please render this as an emoji" variation selector.
  const variantSelector = String.fromCodePoint(0xfe0f);

  return Boolean(
    // Exactly one grapheme, and it either presents as emoji by default or is
    // explicitly switched into emoji presentation.
    Array.from(segmenter.segment(e)).length === 1 &&
      (e.match(regex) || e.includes(variantSelector))
  );
};
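A few spot checks of how that behaves (based on my reading of the relevant Unicode properties):

console.log(isEmoji("🙂")); // true: one grapheme, emoji presentation by default
console.log(isEmoji("❤️")); // true: one grapheme, no default emoji presentation, but contains VS16
console.log(isEmoji("❤"));  // false: one grapheme, text presentation, no VS16
console.log(isEmoji("hi")); // false: two graphemes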

On my test data set, 229 out of 3,664 emoji fail the regex test by itself, such as ☺️, ☹️, ☠️, 👁️‍🗨️. But all of those contain the VS16 Emoji variant selector byte!

This means you use the grapheme count to tell “does this look like one glyph to the user?”, then follow up with “does this either show as an emoji by default, or get converted into one?”. All the safety, no oracles!

Well... mostly. It does mean any byte sequence containing VS16 will be accepted, which isn't the same thing as a valid emoji...

Intl.Segmenter is available everywhere except Firefox. And Postgres cannot count graphemes or use the Emoji regex character class, so you can only do application-level data validation. But you're free from managing an allowlist, so there's that.


Footnotes:

Your emoji picker includes country flags, but the Unicode Consortium doesn't want to take sides on whether Taiwan is a real country.

So they dodged the issue: if your text includes a 2-letter country abbreviation encoded as emoji letters, it might or might not display as a flag, depending on how your device feels.

So you're free to include “🇹 🇼” in your text, and if you just so happen to be in a country that doesn't find Taiwanese sovereignty objectionable, you'll see it displayed as 🇹🇼. Otherwise you'll just see 🇹 🇼.

EDIT: New info from Lobsters: it looks like my information is outdated! Or maybe wrong! I mean the part about how flags are rendered is correct, but the “they don't say which flags are valid” part might not be.

At some point the list of acceptable country flags got enumerated. That file dates back to 2015, and references a Consortium task seeking to clarify what “subtags” are valid. That Atlassian task is newer than the Github commits, so I guess its timestamp is a lie, leaving me unable to tell how early the enumeration took place.

However, “depictions of images for flags may be subject to constraints by the administration of that region.”

I would have learned this factoid sometime around the Unicode 6.0 release in 2010, so maybe they started enumerating country codes later, or maybe I just learned wrong in the first place.


Why emoji and not a normal tagging system with arbitrary text?

I want ReadStuffLater to be a very low-friction, low-cognitive-overhead experience. It's not a place to organize your second brain; it's just read-and-delete.

Simply making rich organization available can make people feel like they're supposed to use it. And once people think that's the kind of app this is, they'll start expecting features that are expressly out of scope.

Yet once a user saves hundreds of links, they need something besides one giant list. This is my attempt to split the difference. And for product positioning purposes, I want to signal “do not expect this to be the same as Instapaper”.

If a more second-brain-flavored reading list is what you need, I recommend Instapaper, Pocket, or Wallabag. They're a take on this problem with a stronger focus on long-term knowledge retention.

My app ReadStuffLater fundamentally revolves around scraping web pages with the Microlink API. Sometimes that goes wrong: the target web page has a problem. Or Microlink does. Or the target throws up a captcha or blocks data center IPs or something.

I thought I'd done an alright job of handling all the cases that could go wrong. API errors are retried, website errors retry or do something sensible based on common status codes.

Yet I still get links that intermittently fail to scrape for no apparent reason. There are a couple usual suspects where I thought I'd handled all the failure modes, but they keep going wrong. And random pages have problems sometimes, too.

My strategy has been “notice failed scrapes, watch the logs while reexecuting the scrape, then fix whatever I see”. This has problems:

  1. Detecting a legitimate failure is hard. Sometimes a scrape unrecoverably fails for reasons outside my control. False negatives are real.
  2. This doesn't let me diagnose transient failures, where everything is working again by the time I manually verify the problem.
  3. It's a pain since I don't have good tools to peek into the scrape process. I'm always setting up something ad-hoc like console.log in local dev.

I think the right call here is an audit log. Every scrape would get its raw result stored in the DB. Status codes, body, Microlink metadata. Everything.

I guess I'd need a way to look up the audit records for a given link. A CLI script on the prod box would probably be okay. Maybe reformat the data in a way that's convenient to munge with jq.
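Roughly what I have in mind, with a hypothetical scrape_audit table and a generic query function standing in for my actual DB layer:

// Hypothetical audit record; "db" stands in for whatever Postgres client is in use.
interface ScrapeAuditRecord {
  linkId: string;
  requestedAt: Date;
  statusCode: number | null; // null if the request never completed
  body: string | null;       // raw response body, warts and all
  microlinkMeta: unknown;    // whatever metadata Microlink handed back
  error: string | null;      // exception message, if any
}

type Db = { query: (sql: string, params: unknown[]) => Promise<unknown> };

const recordScrapeAudit = (db: Db, rec: ScrapeAuditRecord) =>
  db.query(
    `INSERT INTO scrape_audit (link_id, requested_at, status_code, body, microlink_meta, error)
     VALUES ($1, $2, $3, $4, $5, $6)`,
    [rec.linkId, rec.requestedAt, rec.statusCode, rec.body, JSON.stringify(rec.microlinkMeta), rec.error]
  );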

The interesting question is: at what point should I have recognized that I needed an audit log?

Should I be logging ALL external API calls? What internal operations should I log? Do I just wait until I have problems and then start logging? I'm not sure.

The logical extreme of this is “just adopt event sourcing”. And yes, that would solve this problem. But what other problems would it cause? Maybe I should just adopt it piecemeal, and not for the whole system? But then I'm paying the implementation complexity cost for minimal benefit.

Idunno. All I know is right now I sure need an audit log for this one piece of the system 🙂

I was recently on my way out the door when I knocked over a glass of water, spilling it across my Framework laptop. I panicked and tried to dab it up, but saw that water had seeped under the keyboard and was leaking out the bottom of the laptop. The screen began to flicker.

I held down the power button and began disassembling the laptop. In minutes I had the laptop in pieces and ran a hair dryer over it. Water had gotten into many nooks and crannies; every time I tilted the unit a new direction water ran out from somewhere I had missed.

I dried up all the water I could spot, and left the unit open to air out for about 24 hours.

The next morning I put it back together and it works fine!

I don't think I'd have been so lucky with other laptops I've owned. They've all been difficult to open, or purposely designed to keep users out. I'd have been at the mercy of however well they drain, with little assurance of when (or if) it was safe to power them on again.

My Framework was trivial to open, even while stressed and anxious, and I had the comfort of knowing that if it did break, I'd probably only have to replace the mainboard, and not the whole laptop.

disassembled laptop

My app includes content areas that expand and collapse. A lot like accordions, except they take up the whole page and can be huge.

When you open one, whatever's open gets closed, and that makes whatever you just clicked on jump around as the previous content area stops taking up space on the page.

Here's a code snippet I put together that keeps whatever you just clicked at the same spot in the viewport after the layout shift:

/**
 * This ensures that the element is at the same position within the viewport
 * after a layout shift.
 *
 * MUST be called BEFORE triggering the layout shift.
 *
 * It can only do so much - if the layout shift cuts off enough content, the
 * element will still wind up positioned higher in the viewport than before.
 */
export const retainScrollPosition = (el: Element) => {
  const targetViewportPosition = el.getBoundingClientRect().top;

  requestAnimationFrame(() => {
    const newPagePosition = el.getBoundingClientRect().top + window.scrollY;
    window.scrollTo({ top: newPagePosition - targetViewportPosition });
  });
};
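Usage looks roughly like this; the toggle function here is a stand-in for my actual component code:

// Stand-in for the real expand/collapse logic.
const toggleExpandedSection = (el: Element) => el.classList.toggle("expanded");

const onHeaderClick = (clickedEl: Element) => {
  // Record where the element sits in the viewport first...
  retainScrollPosition(clickedEl);
  // ...then trigger the layout shift (collapse the old section, expand this one).
  toggleExpandedSection(clickedEl);
};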

We have to pick a new health insurance plan this month, and we've had a tough time making the decision.

You can't just add up what you'll spend – what each thing costs depends on how much you've already spent!

And some things are inherently probabilistic – will I go through procedure X this year? How many visits will I need for condition Y? How many urgent care visits?

So complex and uncertain!

Vaguely recalling a blog post by Lucas F. Costa that I read some time ago, I applied the Monte Carlo method to my health insurance decision.

I have a simplistic understanding of Monte Carlo simulations:

  1. Assign probabilities to everything that can happen in your scenario
  2. Randomly select outcomes for each possible event, then repeat the calculation a gazillion times
  3. Measure how things typically play out

It can get much fancier (hello, MCMC!) but I think that's the gist of it.

I put together a simple TypeScript file with some arithmetic operations and calls to Math.random() and ran it with Bun. I punched in all the reasons my wife and I will or might spend money on healthcare, added in the premiums, and took the average result.
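The script looked roughly like this; the events, probabilities, and plan numbers below are made up for illustration, not our actual situation:

// Toy version of the simulation. Run with Bun or Node; all numbers are invented.
type Plan = { monthlyPremium: number; deductible: number; coinsurance: number; oopMax: number };

const simulateYear = (plan: Plan): number => {
  let billed = 0;
  if (Math.random() < 0.3) billed += 2500;       // maybe procedure X happens this year
  billed += Math.floor(Math.random() * 6) * 150; // 0 to 5 visits for condition Y
  if (Math.random() < 0.5) billed += 400;        // the odd urgent care visit

  // Crude cost sharing: deductible first, then coinsurance, capped at the out-of-pocket max.
  const afterDeductible = Math.max(0, billed - plan.deductible);
  const outOfPocket = Math.min(
    plan.oopMax,
    Math.min(billed, plan.deductible) + afterDeductible * plan.coinsurance
  );
  return plan.monthlyPremium * 12 + outOfPocket;
};

const averageCost = (plan: Plan, runs = 100_000): number =>
  Array.from({ length: runs }, () => simulateYear(plan)).reduce((a, b) => a + b, 0) / runs;

console.log(averageCost({ monthlyPremium: 450, deductible: 1500, coinsurance: 0.2, oopMax: 6000 }));
console.log(averageCost({ monthlyPremium: 250, deductible: 5000, coinsurance: 0.4, oopMax: 9000 }));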

Surprisingly, the expensive plan will save us a couple thousand dollars this year, even accounting for the higher premiums.

I feel better about the decision since I did something resembling rigorous calculation of which plan is best. Usually I just guesstimate and anxiously hope for the best.

I give 100% effort. But I have no sense of moderation – I'm on or off.

I can do great work when I care about something. But if I don't, I can barely work at all.

I don't take credit for work that wasn't mine. But I can't feel work satisfaction just because I'm in the same team or company as the person who did something cool.

I can understand some things very clearly and deeply. But I can't believe something or change my mind just because someone insists I should.

I can do excellent work when I understand the assignment in detail. But when I don't, I can't even get started.

I have very broad interests, and can get excited about many subjects. But I can't stay focused on any one subject very long.

I aspire for my work to be excellent. But I'm inflexible, opinionated, stubborn, and the pickiest eater you'll ever meet.


Every neurodivergent trait that makes me stronger also makes me weaker. A coin with two sides. But sometimes people don't understand that; they imagine that their own capabilities are a baseline that I can just add my own strengths on top of.

My weaknesses are an integral part of me. You can't have half of the coin.

Dokku has a Let's Encrypt plugin which works behind Cloudflare. There's just a little bit of chicken-and-egg setup involved.

Let's Encrypt needs to connect back to your server to validate ownership of your domain. You can't have Cloudflare's “full” TLS mode enabled when you're doing first-time validation, because in “full” mode Cloudflare will error out, failing to establish a TLS connection to your not-yet-TLS backend server.

You could disable “full (strict)” TLS mode in Cloudflare, but then you'll take all your sites down: Dokku does HTTP -> HTTPS redirects on all sites configured with TLS, and will thus reject the non-TLS inbound connections from Cloudflare's networks. Or more accurately, it'll receive an inbound HTTP request from Cloudflare's servers, return a redirect to HTTPS, which Cloudflare will pass on to the client, but the client is already at an HTTPS URL, so the client will enter an infinite redirect loop.

You can get around all this during first-time setup by disabling Cloudflare's proxying behavior on your domain while you get Let's Encrypt set up on Dokku. After it's set up, you can turn Cloudflare proxying on, and cert renewals should work fine, since Let's Encrypt validation checks routed through Cloudflare can still establish end-to-end TLS while your certs remain valid.

While upgrading graphql-code-generator at work, I found that the generated code no longer exports the type for a query node. Instead, the type of the whole query response is exported. But I had code that relied on having the node type available.

To solve this, I created a new type by reading attributes from deep in the response type.

interface Foo {
    bar: {
        baz: {
            zap: {
                zoop: number;
            }[];
        };
    };
}

type zoop = Foo['bar']['baz']['zap'][number]['zoop'];

If some of your properties are optional, you can use Required<T>. If they're possibly undefined, use Exclude<T, undefined>.
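For example, with a hypothetical looser version of Foo where some levels are optional or possibly undefined:

// Hypothetical variant of Foo with optional / possibly-undefined levels.
interface FooLoose {
    bar?: {
        baz: {
            zap: {
                zoop: number;
            }[];
        } | undefined;
    };
}

// Required<T> strips the "?" so the optional property can be indexed into.
type Bar = Required<FooLoose>['bar'];

// Exclude<T, undefined> drops undefined from the union before indexing deeper.
type Baz = Exclude<Bar['baz'], undefined>;

type Zoop = Baz['zap'][number]['zoop']; // number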

Recently, while building a simple Reddit clone, I wanted to lazy-load images and comments. That is, rather than loading all of the images and comments the instant I added a component to the DOM, I wanted to wait until the component was actually visible. This spreads out the impact of loading a page, both for the client and the server.

Read more...