Lately, I’ve been doing some digging into captive portals - you know, these things that pop up when you try to connect to the WiFi at a coffee shop or hotel. In essence, they are very simple: when you join a network, the operating system requests a well-known URL and expects a particular response (e.g. a ‘success’ message in the body), which tells the system that you’re online.
Now, I’m a massive UniFi fan. Over the years, I’ve migrated most of my networks over to UniFi equipment with Dream Machine Pros as the firewall. It’s a great turn-key solution that largely just works. Prior to this, I usually relied on pfSense (and later OPNsense), but frankly, the UDM is just a great device that integrates seamlessly with all other UniFi hardware. That means provisioning a whole network is done in a few clicks.
Bringing this back to the topic of captive portals, UniFi’s console does allow you to provision captive portals out-of-the-box (called ‘Hotspot Portal’). However, their support is rather limited. Usually, the reason any commercial WiFi deployment would want to use a captive portal is to capture email addresses (for marketing) or to set tracking pixels (for remarketing). UniFi’s default captive portal does not allow you to do either of these, making it somewhat moot to enable.
What they do support, however, is something they call ‘External Portal Server,’ which allows you to integrate a third-party captive portal with the UniFi hardware. This is all neat until you start digging into the details. Or should I say, the lack of details. You see, UniFi provides absolutely zero documentation on this. There are, however, third-party services/libraries that have reverse-engineered how this flow works, but it’s less than ideal.
What you notice when you start digging into these third-party tools is that they all ask you for admin access to your UniFi Console (which also needs to be publicly accessible). That should be a pretty big red flag right there, as this essentially gives them full control over your network. In theory, they could own your entire network and do all sorts of mischief, like redirecting gmail.com to a phishing site.
There is, however, a good reason why they ask for this. As it turns out, this is required for an External Portal Server to work; they need the ability to approve guests by issuing an API call to the console.
When a user tries to access the WiFi, a GET request is sent to the external server that looks like this:
http://externalportal.example.com?ap=access_point_mac&user_mac=user_mac_address&ssid=network_ssid&url=original_url_requested
Notice all those GET parameters: the access point’s MAC address (ap), the client’s MAC address (user_mac), the SSID the user connected to (ssid), and the URL the user originally requested (url).
With that information, the external portal then needs to issue back a POST request to the console that looks something like this:
POST /api/s/<site_name>/cmd/stamgr
Authorization: Bearer <API_Token>
Content-Type: application/json
{
"cmd": "authorize-guest",
"mac": "<user_mac_address>",
"minutes": <authorization_duration>
}
Notice that we need to pass the MAC address back, along with how long the session should be open.
Now, there are a few problems with this, the main one being that the endpoint requires an authenticated session (you first need to log in against /api/auth/login with a set of admin-level credentials). As you can see, this is why these tools need both direct access to your UniFi console and a set of credentials. Some of them also use this to provision the External Portal Server configuration, which is somewhat neat.
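To make this concrete, here’s a rough curl sketch of what these tools do behind the scenes. Everything below is a placeholder (console address, site name, credentials, MAC address, duration), and the endpoints simply mirror the flow described above; depending on the setup, the second call may instead carry an API token in an Authorization header, as in the earlier example, and I haven’t verified this against every console/firmware version:
# Step 1: authenticate against the console (admin-level credentials required)
curl -sk -c cookies.txt -X POST 'https://unifi.example.com/api/auth/login' \
  -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "changeme"}'
# Step 2: authorize the guest using the MAC address from the redirect
curl -sk -b cookies.txt -X POST 'https://unifi.example.com/api/s/default/cmd/stamgr' \
  -H 'Content-Type: application/json' \
  -d '{"cmd": "authorize-guest", "mac": "aa:bb:cc:dd:ee:ff", "minutes": 480}'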
The short answer is that the only sensible way to do this is if you run your External Portal on the local network. It’s still far from ideal, but that way, you can at least avoid having to expose your console to the public internet. It wouldn’t be very difficult to implement the above flow, but I don’t think anyone who cares even the slightest about security should a) expose their console to the public internet and b) hand over admin credentials to a third party.
What I’m really hoping for is that UniFi will allow API access to their cloud-based console (unifi.ui.com), and provide proper API documentation. Currently, the implementation is prone to breakage, especially since the API isn’t even versioned. Additionally, the lack of ACL/permissions for user accounts is a significant issue. Ideally, one should be able to issue an API token that can only approve clients. Solving these issues would make it viable to offer this as a cloud-based service.
For now, I might develop a simple Python application to test the waters, as you can access the console’s API over the LAN. Or I might just give the NodeJS implementation below a go.
My initial idea of creating a SaaS product around this (I even bought the domain CaptivePortalConnect.com for the purpose) is likely not worth pursuing due to the fragile (and insecure) approach required.
Seeing the Mac App Store as a big opportunity, we decided to jump in and create something unique for its launch.
That’s how we came up with Blotter. It was a simple yet smart idea: a calendar that sits right on your desktop wallpaper, always visible but never in the way. It combined functionality with sleek design, but pulled all data from the native macOS applications (Calendar and Reminders), to avoid having to re-invent the wheel.
Blotter was ready for the Mac App Store’s first day, and it was a hit. It ranked among the top 10 productivity apps in the US, only briefly beaten by Apple’s own applications. We also made the top 10 store-wide in numerous locations worldwide. Over the course of a few years, we kept improving Blotter, fixing early shortcuts in the implementation (with a lot of help from Ilya Kulakov).
One big issue we faced was the Mac App Store’s rule against charging for app updates. This meant we couldn’t earn from new versions of Blotter unless we released it as a completely new app. This was a massive challenge for us, as it really limited our options. Mind you, Blotter was a few years old by this point, and there was a willingness to pay for an updated version.
Eventually, we moved Blotter into maintenance mode, focusing only on essential updates due to the above limitation.
It took until macOS Sonoma in 2023 for Apple to introduce a feature like Blotter as part of the system, which felt like a nod to what we had built.
From Blotter, I learned a lot about the power and limitations of selling through an App Store. It’s a great way to get your app out there, but it also puts a cap on how much you can grow.
Looking back at the success of Blotter, it really came down to two things:
Looking back at it, I’m not sure we could have done much more with Blotter. The biggest lesson was the lack of recurring revenue. Some macOS apps have tried to solve this by moving to a subscription model, but I’m personally not a fan of this. Apps I used to love (like Git Tower) adopted it, and I stopped using them. While I happily pay for plenty of SaaS products, I just mentally struggle with a recurring charge for a desktop application.
In chapter 3, which will be the final chapter, I will unpack our experience with bootstrapping Screenly.
]]>I’ve written a few short articles about my experience bootstrapping businesses. I’ve both bootstrapped and raised money (both sides of the pond). These are my learnings (with context) and hopefully, they are helpful for others.
The first question one might ask is when should you bootstrap. In my experience, it makes sense for low-to-mid complexity products (or if you have deep pockets). There are no doubt products and services that you cannot bootstrap, such as deep tech with a slow time to market, training AI models that will run you millions of dollars in compute resources, or where you aggressively need to pay for market share to corner a market.
If you’re building a relatively straightforward product, it makes a lot of sense to bootstrap. In fact, it probably doesn’t make sense to raise money if you (and/or your team) are able to write some or most of the code yourself to generate the first revenue.
Put another way, why would you want to give up control of your company for a small angel round? The second you accept the term sheet, you’ve picked your path. It will be challenging to change it, as you need to convince others. If you bootstrap, you are in control of your destiny. That, and the fact that you don’t need a $100m exit to change your life. If you sell for $10m bootstrapped, you will often pocket more than you would at $150m if you take VC money (as you’ll likely be at Series C at least, and heavily diluted). You will, on the other hand, make a lot of LPs and fund managers rich if you sign that term sheet and end up selling. I’ve seen entrepreneurs close $100+ million exits and walk away with just a few hundred thousand, and bootstrapped businesses sell for just north of $10m where the founder kept it all.
The worst of both worlds is to raise money at shitty valuations, get a small amount of money, and yet live like you’re bootstrapping (yes, I’m looking at European angels/VCs). If you’re raising money, and they tell you that your salary should be rent + 10% or whatever, do yourself a favor and walk away.
It’s one thing to be broke while bootstrapping your own business (I’ve been there), but there’s absolutely no reason to do that while making someone else rich.
While still in college in the early 2000s, Alex and I created our first business, YippieMove. It was an email migration tool targeted at students. The idea came when I was graduating and realized my account would be shut down and my email wiped (this is probably no longer a problem). The idea was to offer cheap email migration for students ($10). As an experienced entrepreneur, it’s easy to point out countless flaws in our plan, but it did give us a taste of fundraising in the valley. (My biggest regret from this was probably not applying for YC, as we would have been in the same cohort as Airbnb.)
Fast forward a few years, and we ended up pivoting to self-served email migrations for SMB and colleges (we had Harvard and a few other big names as customers). In the end, it was a business that flatlined early. Our only saving grace was that we rode on a wave of Google Apps (now Google Workspace) adoption, and Google didn’t have any tooling for this.
What I learned from this endeavor was a few things:
I also got to know Kevin Henrikson, who (fortunately) passed on investing in YippieMove, but turned out to become a friend, mentor and board member for Screenly.
I’ll continue my lessons in Chapter 2 with my next experience with building Blotter.
]]>First, let’s talk about my new project, “Nerding Out with Viktor,” a video podcast where I dive into riveting tech discussions with experts from various fields. The first episode features a deep dive into Cloud Native security with Andrew Martin from ControlPlane. We cover a range of topics, including ethical hacking and cybersecurity strategies. It’s a real treat for anyone keen on tech and security. You can catch the podcast on platforms like Apple Podcasts, Spotify, and YouTube. Make sure to subscribe and join in on the conversation!
Also, I’m excited to introduce the Podcast RSS Generator, a tool I developed for podcasters (or rather, for myself). It’s designed to generate RSS feeds for audio/video podcasts, especially useful for those using S3, Google Cloud Storage, or Cloudflare R2 for hosting. This tool stands out in its field, being possibly the first of its kind and is even featured in the GitHub Actions Marketplace. It’s adaptable to various storage solutions, making it a versatile choice for podcast creators. Check it out in the GitHub Actions Marketplace.
Stay tuned for more exciting episodes and updates on these projects! Your support and feedback are always appreciated. Let’s keep nerding out together!
]]>I scoured forums and guides for ages, trying to find a way to kill this feature. It felt like a quest without a map, because, let’s face it, who knows where to look for such obscure settings?
Today, I hit my breaking point. I dove back into the digital trenches and finally struck gold in a blog post. Turns out, this infuriating feature has been lurking in macOS for several iterations. Why, Apple, why?
But here’s the silver lining – the way to reclaim your peace:
Boom! No more pop-up sneak attacks. Sanity, welcome back to my digital realm.
]]>I’ve been a fan of ZeroTier for some time and use it both personally and professionally to access nodes behind firewalls. Recently, I kept hearing from more and more people how much they love Tailscale. After hearing @jnsgruk’s glowing reviews of Tailscale at Ubuntu Developer Summit, I decided to give it a try.
It’s clearly a much more refined product than ZeroTier. The user interface and user experience are smoother, and it feels more production-ready. After enrolling a few nodes, I was convinced it was time to start migrating.
There isn’t much to write about how to add a machine to Tailscale, as that part is straightforward. What I will focus on in this article is how to use two really neat features in Tailscale - MagicDNS and built-in HTTPS certificates - to solve the problem in the opening paragraph. Together, they are a game changer.
For some time, I’ve been planning to secure my local web services (Home Assistant, Proxmox etc) with self-signed SSL certificates issued by a local CA (using cfssl). However, this opens a Pandora’s Box of security issues. As you now need to trust a local self-signed CA on all your machines, this could, in theory, be exploited for some really nasty MiTM attacks if someone were to get their hands on the root CA keys.
As it turns out, Tailscale solves this for me, but instead of using self-signed certificates, I get proper certificates issued from Let’s Encrypt.
The way it works is rather elegant. When you enable MagicDNS, you get a domain assigned (e.g. foobar.ts.net
). All your devices will then get a hostname there (e.g. my-server.foobar.ts.net
). Since this is a valid FQDN, Tailscale can use it to issue proper certificates from Let’s Encrypt with the command tailscale cert my-server.foobar.ts.net
.
Just like with regular Let’s Encrypt certificates, these are semi short-lived and thus need to be renewed periodically. As such, we need to automate this renewal on all our hosts with a simple systemd service:
# /etc/systemd/system/tailscale-cert.service
[Unit]
Description=Tailscale SSL Service Renewal
After=network.target
After=syslog.target
[Service]
Type=oneshot
User=root
Group=root
WorkingDirectory=/etc/ssl/private/
Environment="HOSTNAME=my-server"
Environment="DOMAIN=foobar.ts.net"
ExecStart=tailscale cert ${HOSTNAME}.${DOMAIN}
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/tailscale-cert.timer
[Unit]
Description=Renew Tailscale cert
[Timer]
OnCalendar=monthly
Unit=tailscale-cert.service
Persistent=true
[Install]
WantedBy=timers.target
With these two files created, you can manually start the service to ensure it works:
$ systemctl daemon-reload
$ systemctl start tailscale-cert.service
$ systemctl enable tailscale-cert.timer
If everything went well, you should now have your certificates in /etc/ssl/private
.
We can then set up a basic Nginx configuration to expose an internal service running on say localhost:8080
by editing /etc/nginx/sites-enabled/default
(or similar):
server {
listen 80 default_server;
listen [::]:80 default_server;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
server_name my-server.foobar.ts.net;
ssl_certificate /etc/ssl/private/my-server.foobar.ts.net.crt;
ssl_certificate_key /etc/ssl/private/my-server.foobar.ts.net.key;
# Consider including the hardening config that Let's Encrypt
# recommends for enhanced security.
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_pass http://localhost:8080;
}
}
You should now be able to navigate to your server using a proper SSL certificate at the address https://my-server.foobar.ts.net. Pretty neat.
There’s even an official Nginx auth helper that can be used. I haven’t experimented with this yet, but it could presumably be used to further secure your Nginx reverse proxies.
Below are some notes that I encountered while setting this up on various systems.
By default, Home Assistant will complain if you try to access it over a proxy. To overcome this, you need to add this in configuration.yaml
.
http:
use_x_forwarded_for: true
trusted_proxies:
- 127.0.0.1
You may also need to tweak your Nginx config for working with WebSocket.
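For reference, the WebSocket part usually boils down to something like this in the location block (a sketch assuming Home Assistant on its default port 8123; adjust to your own setup):
location / {
    proxy_pass http://localhost:8123;
    # Required for WebSocket upgrades
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}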
If you’re intending to connect to a MariaDB/MySQL instance using TLS, the official documentation is incorrect. As this forum post correctly points out, instead of ;ssl=true
, you need to append &ssl=true
.
However, note that Home Assistant does not actually verify the SSL certificate, thus making it vulnerable to a MiTM attack.
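For reference, the recorder connection string then ends up looking something like this (hypothetical user, password, host, and database names):
recorder:
  db_url: mysql://hass:changeme@mariadb.foobar.ts.net/homeassistant?charset=utf8mb4&ssl=true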
To use InfluxDB with TLS, you need to make the following changes to your Home Assistant YAML file:
influxdb:
api_version: 2
ssl: true
host: influxdb.foobar.ts.net
port: 8086
Setting up Tailscale on Proxmox was just like any other system. The somewhat tricky part was to consume the certificate. To accomplish this, I added the following line after ExecStart
in /etc/systemd/system/tailscale-cert.service
, which instructs Proxmox to use the certificates issued from Tailscale:
ExecStartPost=pvenode cert set /etc/ssl/private/${HOSTNAME}.foobar.ts.net.crt /etc/ssl/private/${HOSTNAME}.foobar.ts.net.key --force 1 --restart 1
Somewhat unrelated to the certificates, but in order to install Tailscale on an LXC container, you need to run it in privileged mode as per this document and add to the config:
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
Note that you can’t move a container to privileged mode from unprivileged, as this will break file permissions and other things in the container. The way to accomplish this is to take a backup of the container, and then restore the container as privileged.
To use the certificate with MariaDB/MySQL, we need to modify our setup slightly to accommodate permissions.
# /etc/systemd/system/tailscale-cert.service
[Unit]
Description=Tailscale SSL Service Renewal
After=network.target
After=syslog.target
[Service]
Type=oneshot
User=root
Group=root
Environment="HOSTNAME=mariadb"
ExecStart=/usr/local/sbin/tailscale-mysql.sh
[Install]
WantedBy=multi-user.target
#!/bin/bash
# /usr/local/sbin/tailscale-mysql.sh (referenced by the unit above)
set -euo pipefail
IFS=$'\n\t'
mkdir -p /etc/mysql/ssl
tailscale cert \
--cert-file /etc/mysql/ssl/cert.crt \
--key-file /etc/mysql/ssl/cert.key \
${HOSTNAME}.foobar.ts.net
chown mysql:mysql -R /etc/mysql/ssl
chmod 0660 /etc/mysql/ssl/cert*
With this done, we just need to tell MariaDB/MySQL to use these certificates:
# /etc/mysql/mariadb.conf.d/50-server.cnf
[...]
# We need to point to a CA path instead of the CA file since we
# are using a proper certificate.
ssl-capath = /etc/ssl/certs/
ssl-cert = /etc/mysql/ssl/cert.crt
ssl-key = /etc/mysql/ssl/cert.key
require-secure-transport = on
# We don't want to allow lower ciphers than 1.2 as per NIST etc
tls_version=TLSv1.2
[...]
Finally, restart the service with systemctl restart mysql
.
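To verify that connections are actually encrypted, you can check the session status from a remote client (hypothetical user name); a non-empty value means the connection is using TLS:
mysql -h mariadb.foobar.ts.net -u hass -p -e "SHOW SESSION STATUS LIKE 'Ssl_cipher';"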
You might also want to recreate your MySQL user and add the constraint REQUIRE SSL; to ensure remote users can only connect using TLS (even though that should in theory also be enforced by require-secure-transport).
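For example, something along these lines (hypothetical user name and host; on older MariaDB versions you would use GRANT ... REQUIRE SSL instead):
ALTER USER 'hass'@'%' REQUIRE SSL;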
Much like MariaDB/MySQL, setting up InfluxDB requires a bit of extra work around permissions.
We start with a unit that fires a Bash script:
# /etc/systemd/system/tailscale-cert.service
[Unit]
Description=Tailscale SSL Service Renewal
After=network.target
After=syslog.target
[Service]
Type=oneshot
User=root
Group=root
Environment="HOSTNAME=influxdb"
ExecStart=/usr/local/sbin/tailscale-influxdb.sh
[Install]
WantedBy=multi-user.target
We then create the bash script:
#!/bin/bash
# /usr/local/sbin/tailscale-influxdb.sh (referenced by the unit above)
set -euo pipefail
IFS=$'\n\t'
mkdir -p /etc/influxdb/ssl
tailscale cert \
--cert-file /etc/influxdb/ssl/cert.crt \
--key-file /etc/influxdb/ssl/cert.key \
${HOSTNAME}.foobar.ts.net
chown influxdb:influxdb -R /etc/influxdb/ssl
chmod 0660 /etc/influxdb/ssl/cert*
Upon starting this script, we should now get our certificates in place. The only thing we need to do now is to edit the configuration file to have it consume our certificate:
# /etc/influxdb/config.toml
bolt-path = "/var/lib/influxdb/influxd.bolt"
engine-path = "/var/lib/influxdb/engine"
tls-cert = "/etc/influxdb/ssl/cert.crt"
tls-key = "/etc/influxdb/ssl/cert.key"
Restarting the service will automatically make InfluxDB serve content over HTTPS.
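A quick way to confirm this is to hit the health endpoint over the Tailscale hostname (assuming InfluxDB 2.x on its default port):
curl -s https://influxdb.foobar.ts.net:8086/health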
I have yet to set this up. It should certainly be possible, but BSD isn’t an officially supported platform. Moreover, because OPNsense uses its own CA for things like VPN configuration, it might be a bit more challenging.
Configuring CUPS to use Tailscale was rather straightforward.
After installing the systemd unit and timer, you should have your certificate available.
Next, open up /etc/cups/cups-files.conf
and add CreateSelfSignedCerts no
to prevent CUPS from issuing its self-signed certificates.
Next, delete all existing self-signed certificates by running sudo find /etc/cups/ssl -type f -delete
.
We now need to symlink our existing Tailscale certificates into the place where CUPS looks for them (i.e. /etc/cups/ssl
). This is done by running:
sudo ln -s /etc/ssl/private/my-box.foobar.ts.net.{crt,key} /etc/cups/ssl/
Finally, we need to make some tweaks to /etc/cups/cupsd.conf
:
Set the ServerAlias stanza to match your Tailscale hostname, such as ServerAlias my-box.foobar.ts.net. Change the Listen stanza to only listen on the Tailscale interface, by doing Listen my-box.foobar.ts.net. Also turn on Browsing On so that the printers are easier to find.
Lastly, we need to edit all the relevant <location>
blocks and add allow all
to the configuration. Initially, I was planning to use the @IF(tailscale0)
macro (cupsd.conf), but I wasn’t able to get this working and didn’t dive much further into it.
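For reference, the relevant blocks ended up looking roughly like this (a sketch; your defaults and the exact set of locations may differ):
<Location />
  Order allow,deny
  Allow all
</Location>
<Location /admin>
  Order allow,deny
  Allow all
</Location>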
With all those changes live, you can just restart CUPS (sudo systemctl restart cups
), and you should be good to go.
To add the printer on macOS, just add it as an IP printer with the hostname being the Tailscale hostname and then the /printers/<name of printer>
as the Queue.
The response is not a valid JSON response. In addition, other weird symptoms I received were requests to /wp-json failing with “The response is not a valid JSON response”, and redirect responses carrying the header x-redirect-by: WordPress.
After spending a long time researching this, I discovered about a hundred useless answers to why people received this. In retrospect, one of the common replies was that it was related to SSL. However, in my case SSL worked just fine, so that didn’t make a lot of sense. Except that it was right on the money, in a convoluted way.
See, the server was running a regular LAMP installation with Let’s Encrypt providing the SSL certificate. That part worked; I knew the certificate was valid. I then use Cloudflare for DNS/CDN/DDoS protection. Routing to the site worked just fine. However, by default, Cloudflare sets its SSL/TLS encryption mode to ‘Flexible’. This means that Cloudflare will gladly reverse proxy to an HTTP end-point.
In my case, what happened was that Cloudflare would serve the content as HTTPS to the client, but use HTTP in the reverse proxy. As far as WordPress is concerned, the content is then served as HTTP, thus causing some internals to break. (On a technical note, what I would imagine is happening is that WordPress simply ignores common headers, such as X-Forwarded-Proto, that could have helped here.)
The solution was to change the ‘encryption mode’ from ‘Flexible’ to ‘Full (strict)’, which means that Cloudflare will use HTTPS to communicate with the back-end (while also validating the certificate). This is exactly what I wanted.
Hopefully this will save someone else a bunch of head scratching.
]]>(Do however note that this is different than how you grant a user SSH access in pfSense, where the steps do align with the outdated documentation.)
Here’s how you do it: create a new user (the shell can be left as nologin), and make sure to add a valid SSH key and to add the user to the group ‘remote_access’.
This of course assumes that you have SSH already enabled and remotely accessible. However, assuming this is true, you should now be able to log in using the newly created user.
]]>Here’s the problem:
> $ sudo apt upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
5 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n]
Setting up raspberrypi-kernel (1.20210430-1) ...
Removing 'diversion of /boot/kernel.img to /usr/share/rpikernelhack/kernel.img by rpikernelhack'
dpkg-divert: error: unable to change ownership of target file '/boot/kernel.img.dpkg-divert.tmp': Operation not permitted
dpkg: error processing package raspberrypi-kernel (--configure):
installed raspberrypi-kernel package post-installation script subprocess returned error exit status 2
Setting up raspberrypi-bootloader (1.20210430-1) ...
Removing 'diversion of /boot/start.elf to /usr/share/rpikernelhack/start.elf by rpikernelhack'
dpkg-divert: error: unable to change ownership of target file '/boot/start.elf.dpkg-divert.tmp': Operation not permitted
dpkg: error processing package raspberrypi-bootloader (--configure):
installed raspberrypi-bootloader package post-installation script subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of libraspberrypi0:
libraspberrypi0 depends on raspberrypi-bootloader (= 1.20210430-1); however:
Package raspberrypi-bootloader is not configured yet.
dpkg: error processing package libraspberrypi0 (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libraspberrypi-bin:
libraspberrypi-bin depends on libraspberrypi0 (= 1.20210430-1); however:
Package libraspberrypi0 is not configured yet.
dpkg: error processing package libraspberrypi-bin (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libraspberrypi-dev:
libraspberrypi-dev depends on libraspberrypi0 (= 1.20210430-1); however:
Package libraspberrypi0 is not configured yet.
dpkg: error processing package libraspberrypi-dev (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
raspberrypi-kernel
raspberrypi-bootloader
libraspberrypi0
libraspberrypi-bin
libraspberrypi-dev
E: Sub-process /usr/bin/dpkg returned an error code (1)
After wasting far too much time trying to debug this, I found that there was an easy solution: simply remount /boot
.
> $ sudo umount /boot
> $ sudo mount /boot
Now, I don’t know what the exact root cause is, but it’s likely something to do with the fact that /boot
is mounted as vfat and for whatever reason there is some kind of lock or similar that happens after you’ve run the device for some time.
In any case, hopefully this saves others the agony of trying to resolve this.
]]>Well. It wasn’t.
As it turns out, some NUCs do not like modern USB sticks. After trying two or three different USB sticks I had lying around, none of them were picked up by the system (neither as boot devices nor for flashing the BIOS). Since I bought a few “good” USB sticks last year, I had given away or thrown away the “crap” ones…the ones that the NUC would have accepted.
Since I really needed to re-install the machine so that I could use it for some heavy Docker builds, I had to come up with a workaround. This is when I realized that I could, in theory, install Debian using PXE boot. I was a bit hesitant at first, but then I ran across netboot.xyz. In short, netboot.xyz is a boot image that allows you to select among a large number of distributions and tools.
As it turns out, configuring PXE booting on my pfSense ended up being a breeze. Within a matter of minutes, I was able to boot my NUC into the Debian Buster installer. As a bonus, I’m also able to PXE boot VMs instead of having to download ISOs.
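For anyone wanting to replicate this, the gist is roughly the following (a sketch from memory; the TFTP host is hypothetical and the pfSense field names may differ slightly between versions):
# On a host running a TFTP server, fetch the netboot.xyz boot images:
curl -LO https://boot.netboot.xyz/ipxe/netboot.xyz.kpxe   # legacy BIOS clients
curl -LO https://boot.netboot.xyz/ipxe/netboot.xyz.efi    # UEFI clients
# Then, in pfSense under Services > DHCP Server, enable network booting,
# point the next-server setting at the TFTP host, and set the two filenames
# above as the BIOS and UEFI boot file names respectively.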
]]>