Reader

Welcome to Keyboard Vagabond, the blogging platform of the Keyboard Vagabond fediverse community for nomads and travelers! To get started, check out the About page and for background see Why Keyboard Vagabond.

To sign up, send an email with why you became or want to become a nomad to admin@mail.keyboardvagabond.com; this helps us keep spam out of the system. 😊

This is the Reader: see the latest posts from our users here!

from Programming

Thoughts of a functionally oriented CSharp dev creating an FSharp web app. Feedback appreciated.

I am currently writing a web site/application in FSharp. I could most likely do it more quickly in a website builder, but I've been wanting to write FSharp for a while. I've done bits and pieces in coding challenges, and I've been writing C# in a functional style ever since getting familiar with FSharp and functional programming. As such, I'm using this as an opportunity to learn. A lot of things look great on their promo pages, but how are they really to use in an IDE or editor, with authorization, logging, metrics, timeouts, etc.?

Right now the portion that I've been spending too much time in is the data access layer for media file uploads. The basic idea so far (which I don't know is absolutely correct, but it's how I did it years ago) is that a request is made for an item or a batch, which generates metadata and ids that are then used for the file upload. The file upload will update the Media db entry, save to disk, publish an event, etc.

For the data access layer, I've been using both raw Dapper and DbFun. I tried a few other libraries and currently have my eye on SqlHydra. The two FSharp projects have certain features that I like, namely:

Compile time type checking
They use TypeProviders, FSharp's version of SourceGenerators, to read a schema and validate your queries at compile time. Thus, in a test, you can run any part of a module and the whole file will be read and fail to build if a query no longer matches. FSharp's tooling isn't very great, unfortunately. In Rider, you can set up your IDE with a connection string to the db and it will check your sql strings for you; it's not compile time, but it is something. This doesn't work with FSharp, though.

Separation of the Query and Execution
I'm not really a fan of the Repository Pattern. In a lot of ways it makes sense, but it isn't really as OO as it may appear at first, and it gets bloated very quickly with a new function for almost every variation. And what if you need multiple methods run within a transaction? If you're already creating a context inside each method, you need to change everything.

The functional way of doing this is to separate the data for the DbQuery from the running of the query. DbFun does this via partially applied functions (each parameter can be applied later), like so (with explicit typing).

Note: the QueryBuilder is something that you can configure and register in DI on startup with your type mappings and such.

let findByUserId (userId: UserId) (query: QueryBuilder) : IConnection -> Async<User option> =
    query.Sql<UserId, User option>(
        "SELECT * FROM users WHERE id = @id",
        "id",
        Results.Optional) userId
    // assuming the value of userId can be mapped; the result mapping name
    // may not be quite right, but that's the idea

So this function does a few things. The .Sql call is typed (the types could be inferred): it takes your query text, parameters, and result mapping, and returns a function that takes the connection to execute it, i.e. the runner.

The downside for my use case is that Async is FSharp's own asynchronous type, predating async/await and Task in C#, but the current HttpHandlers on the api only take Tasks, and there is some overhead in converting between them. Not the end of the world, but if it's unnecessary, I'd prefer to try something else first.
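For reference, the conversion itself is a one-liner in each direction; it's the extra allocation and scheduling hop that I'd rather avoid. A minimal, self-contained sketch (findByIdAsync is just a stand-in here, not a real DbFun call):

```fsharp
open System.Threading.Tasks

// stand-in for a DbFun-style Async-returning query
let findByIdAsync (id: int) : Async<string option> =
    async { return if id = 1 then Some "alice" else None }

// Async -> Task, for handing to a Task-only HttpHandler
let findByIdTask (id: int) : Task<string option> =
    findByIdAsync id |> Async.StartAsTask

// Task -> Async, for when a library hands you a Task
let fromTask (t: Task<'T>) : Async<'T> = Async.AwaitTask t
```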

I'd like a way to get the generated DbQuery object and put that through a runner that returns a Task, but that is not the way this works.

So, for now I have a repository with some DbFun methods and some Dapper methods, but that signature is already hella bloated. The repository also handles instrumentation and other things with DI, but some things have gotten repetitive.

type IMediaRepository =
    abstract member FindAsync: id:IdentityId * fileName:MediaSystemFileName -> Async<MediaResponse option>
    abstract member FindTask: id:IdentityId * fileName:MediaSystemFileName * ct:CancellationToken -> Task<MediaResponse option>
    abstract member FindTask: slug:Slug * mediaId:MediaId * ct:CancellationToken -> Task<MediaResponse option>
    abstract member FindTask: id:IdentityId * mediaId:MediaId * ct:CancellationToken -> Task<MediaResponse option>
    abstract member FindTask: slug:Slug * fileName:MediaSystemFileName * ct:CancellationToken -> Task<MediaResponse option>
    abstract member ListTask: slug:Slug * ct:CancellationToken -> Task<System.Collections.Generic.List<MediaResponse>>
    abstract member ListTask: id:IdentityId * ct:CancellationToken -> Task<System.Collections.Generic.List<MediaResponse>>
    abstract member ListTask: id:IdentityId * mediaIds:MediaId seq * ct:CancellationToken -> Task<System.Collections.Generic.List<MediaResponse>>
    abstract member UpsertTask: request:MediaUpsertRequest * ct:CancellationToken -> Task<MediaResponse>
    abstract member UpsertManyTask: mediaUpserts:MediaUpsertRequest seq * ct:CancellationToken -> Task<System.Collections.Generic.List<MediaResponse>>
    abstract member FindByOriginalFileNameTask: id:IdentityId * originalFileName:MediaOriginalFileName * ct:CancellationToken -> Task<MediaResponse option>
    abstract member SetUploadStatusTask: mediaId:MediaId * status:MediaUploadStatus * ct:CancellationToken -> Task<bool>

Every function has this structure:

member this.FindByOriginalFileNameTask(id: IdentityId, originalFileName: string, ct) =
    // withDbActivity is a helper function that I wrote
    let instrument = withDbActivity logger (nameof findByIdentityIdAndOriginalFileNameQuery) (Some findByIdentityIdAndOriginalFileNameSql)
    instrument (fun () -> task {
        let conn = connectionFactory ()
        let guidValue = match id with IdentityId guid -> guid
        let! res =
            conn.QuerySingleOrDefaultAsync<MediaResponse>(
                CommandDefinition(
                    findByIdentityIdAndOriginalFileNameSql,
                    {| id = guidValue.ToString(); originalFileName = originalFileName |},
                    cancellationToken = ct))
        return Option.ofObj res
    })
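One refactor I have in mind is to pull that repeated shell into a single generic helper, so each method shrinks to a query name, the SQL, and the Dapper lambda itself. A rough, self-contained sketch (the instrument parameter stands in for my withDbActivity helper, and the connection type is left generic so the example compiles on its own):

```fsharp
open System.Threading.Tasks

// Sketch only: 'conn stands in for IDbConnection so the example is
// self-contained; instrument stands in for the withDbActivity helper.
let runInstrumented
        (instrument: string -> string option -> (unit -> Task<'a>) -> Task<'a>)
        (connectionFactory: unit -> 'conn)
        (queryName: string)
        (sql: string)
        (run: 'conn -> Task<'a>) : Task<'a> =
    instrument queryName (Some sql) (fun () -> run (connectionFactory ()))
```

Each concrete method would then only supply its name, SQL, and the Dapper call, and the instrumentation plumbing lives in one place.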

My preference would be a bit more separation, with an instrumented runner to handle the execution. Also, right now I'm writing the Sql the Dapper way. The pro is that it's really fast to run since there's nothing to generate. The con is maintainability. So, if a generated query could be cached and applied with new parameters, that'd be great, but it isn't always so easy when working with frameworks.

Another thing with generation is that I prefer to use upsert methods where I can, which in Postgresql gets really long:

INSERT INTO media.media(
    "id", "user_id", "slug", "system_file_name", "original_file_name", "full_uri", "is_deleted", "media_type", "sort_order", "note", "upload_status", "version")
VALUES (@Id, @UserId, @Slug, @SystemFileName, @OriginalFileName, @FullUri, @IsDeleted, @MediaType, @SortOrder, @Note, @DefaultStatus, @DefaultVersion)
ON CONFLICT (id) DO UPDATE SET
    "user_id" = @UserId,
    "slug" = @Slug,
    "original_file_name" = @OriginalFileName,
    "full_uri" = @FullUri,
    "is_deleted" = @IsDeleted,
    "media_type" = @MediaType,
    "sort_order" = @SortOrder,
    "note" = @Note,
    "version" = ... --weird version logic that I don't like  
RETURNING *; -- return the whole record's current state

The ON CONFLICT (UNIQUE KEY) DO ... isn't always supported in generators.

But anyway, SqlHydra, which I haven't tried implementing yet, looks like it generates the query and performs the execution, so I'll see how that works with metrics:

let getExpensiveProducts (db: QueryContextFactory) minPrice =
    selectTask db {
        for p in SalesLT.Product do
        where (p.ListPrice > minPrice)
        select p
    }

How would the separation look in OO?

I really don't want to re-invent the wheel, and a lot of the libraries generate some type of DbQuery record that I'd love to be able to pipe into Dapper or something, but it might look something like the below. The goal of this approach is for the type generics on the query to carry through to the runner, so that Dapper details like mapping one record vs. multiple records are handled for you.

One way that might look is something like this:

// Note that I make changes to the structure later on in the post
abstract record DbQuery<T> {
    public abstract string QueryText { get; }
    public abstract object? Parameter { get; }

    // optional args like timeouts and the cancellation token
    public virtual CommandDefinition ToCommandDefinition(int? timeout = null, CancellationToken ct = default) =>
        new CommandDefinition(QueryText, Parameter, commandTimeout: timeout, cancellationToken: ct);

    public abstract Task<T> Run(IDbConnection connection, CommandDefinition command);
}

abstract record OptionQuery<T> : DbQuery<Option<T>> {
    public override async Task<Option<T>> Run(IDbConnection connection, CommandDefinition command) {
        // the command definition can carry Transaction information, so having it
        // as an argument allows a query to be paired with others
        var record = await connection.QuerySingleOrDefaultAsync<T>(command);
        return Optional(record);
    }
}

abstract record MultipleQuery<T> : DbQuery<List<T>> {
    public override async Task<List<T>> Run(IDbConnection connection, CommandDefinition command) {
        var result = await connection.QueryAsync<T>(command);
        return result.ToList();
    }
}

record GetByUserId(UserId userId) : OptionQuery<User> {
    public override string QueryText => "SELECT * FROM users WHERE id = @id";
    public override object? Parameter => new { id = userId.Value };
}

record ListThingByUserId(UserId userId) : MultipleQuery<Thing> {
    public override string QueryText => "SELECT * FROM thing WHERE user_id = @id";
    public override object? Parameter => new { id = userId.Value };
}

class DapperRunner(Func<IDbConnection> connFactory) {
    public Task<T> Run<TQuery, T>(TQuery query, CommandDefinition? command = null) where TQuery : DbQuery<T> =>
        query.Run(connFactory(), command ?? query.ToCommandDefinition());

    // run in transaction

    // execute multiple
}

class MetricsRunner(DapperRunner runner, SomeMetricsStuff metrics) {
    public Task<T> Run<TQuery, T>(TQuery query, CommandDefinition? command = null) where TQuery : DbQuery<T>
    {
        var queryName = typeof(TQuery).Name; // benefits of strongly typed queries
        var queryText = query.QueryText;
        // log metrics with this info
        return runner.Run<TQuery, T>(query, command);
    }
}

Then, instead of a Repository, you could have things be more functional, like a “module” for the queries.

static class UserThingQueries {
    public record GetByUserId(UserId userId) : OptionQuery<User> {
        public override string QueryText => "SELECT * FROM users WHERE id = @id";
        public override object? Parameter => new { id = userId.Value };
    }

    public record ListThingByUserId(UserId userId) : MultipleQuery<Thing> {
        public override string QueryText => "SELECT * FROM thing WHERE user_id = @id";
        public override object? Parameter => new { id = userId.Value };
    }
}

Now, as far as the consumer goes, it would need a runner injected. You could again put it behind a repository, though that would defeat the purpose a bit; on the other hand, testing might be easier since you could make an interface for just the particular queries that need to be run. There are lots of options with programming.

But I don't know. Going back to my FSharp repo, one thing I would like is for my instrumentation to not need to be copy-pasted per method. It's not a big deal, but if I'm not using strongly typed query objects, then metrics and logging become more difficult. It's nice to have a name for the query so that you can easily see in the code where it's getting used. The libraries that I have don't really offer that, which is why my withDbActivity helper takes a QueryName and a QueryText option as parameters.

I still don't know where I want to go with things like SqlHydra, DbFun (I may have to drop it if I want to use tasks natively), or the current implementation of writing Dapper in a repository. I don't like how big the repo is getting already, and trying to implement this same thing in FSharp feels clunky; I keep getting stuck on various syntax elements. Or I could copy-paste what I wrote into my local llm running on a Radeon 6800XT just because.

What I'm not wild about in the repository is the tight coupling of a query itself and the running of it. Separating those makes it easier to combine queries in a transaction, such as something like below.

let updateOneThing = UpdateOneThingQuery(blah)
let updateAnotherThing = UpdateAnotherThingCommand(blah)

runner.RunInTransaction(fun connection transaction -> task {
    let cmd1 = updateOneThing.ToCommandDefinition(transaction = transaction)
    let cmd2 = updateAnotherThing.ToCommandDefinition(transaction = transaction)

    let! result1 = runner.Run(updateOneThing, cmd1)
    let! result2 = runner.Run(updateAnotherThing, cmd2)
    return result1, result2
})

But writing this out is a bit clunky; parameters are reaching across things everywhere. A different approach would be to put the transaction and the other parameters on the query itself. Restructured like below, this feels a lot more natural.

let updateOneThing = UpdateOneThingQuery(blah)
let updateAnotherThing = UpdateAnotherThingCommand(blah)

runner.RunInTransaction(fun connection transaction -> task {
    let cmd1 = { updateOneThing with Transaction = transaction }
    let cmd2 = { updateAnotherThing with Transaction = transaction }

    let! result1 = runner.Run(cmd1)
    let! result2 = runner.Run(cmd2)
    return result1, result2
})

So the CSharp version of the base would look something like below. This feels a bit more natural, and it's what I've seen in the source code for DbFun and other libs.

abstract record DbQuery<T> {
    public abstract string QueryText { get; }
    public abstract object? Parameter { get; }

    public DbTransaction? Transaction { get; init; } = null;
    public CancellationToken CancellationToken { get; init; }

    public virtual CommandDefinition ToCommandDefinition() =>
        new CommandDefinition(QueryText, Parameter, transaction: Transaction, cancellationToken: CancellationToken);

    public abstract Task<T> Run(IDbConnection connection);
}

abstract record OptionQuery<T> : DbQuery<Option<T>> {
    public override async Task<Option<T>> Run(IDbConnection connection) {
        var record = await connection.QuerySingleOrDefaultAsync<T>(ToCommandDefinition());
        return Optional(record);
    }
}

// consumer
var getByUserIdQuery = new GetByUserId(new UserId(12345));
var listThingByUserIdQuery = new ListThingByUserId(new UserId(12345));

var (getResultOption, listResult) = await runner.RunInTransactionAsync(async (conn, txn) => {
    var getQuery = getByUserIdQuery with { Transaction = txn };
    var listQuery = listThingByUserIdQuery with { Transaction = txn };

    // the Run function only exists to pick between Dapper's Query,
    // QueryMultiple, QuerySingle, etc. and could just be done here
    Option<User> getResult = await getQuery.Run(conn);
    List<Thing> listThings = await listQuery.Run(conn);

    return (getResult, listThings);
});

Most FSharp libs use immutable records and as such can also set the SqlText and Parameter using with, but you'd have to make your own QueryName if you want to log something like that. The custom classes really provide just two things: a name for the query for logging, and an out-of-the-way mapping from Dapper's result to a List or Option or whatever that type is.
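If I sketched that in FSharp, the minimal shape might be a record that carries the name alongside the query, with the result-shaping folded into a function. This is my own sketch, not any library's type:

```fsharp
// Sketch: QueryName travels with the query for logging/metrics, and Map plays
// the role of the C# subclasses (Option, List, etc.). Not a real library type.
type DbQuery<'Raw, 'T> =
    { QueryName: string
      SqlText: string
      Parameter: obj
      Map: 'Raw -> 'T }

let findByUserId (userId: System.Guid) : DbQuery<string, string option> =
    { QueryName = "FindByUserId"
      SqlText = "SELECT * FROM users WHERE id = @id"
      Parameter = box {| id = userId |}
      Map = Option.ofObj }  // lift Dapper's possibly-null result into an option
```

A runner would then execute SqlText with Parameter via Dapper and pipe the raw result through Map, logging QueryName along the way.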

How and whether to incorporate into FSharp

I could do something similar and port this structure over to FSharp syntax, but then I wonder how much it's worth it to wrap SqlHydra's query generation in a query type, such as

// normal way:
let getProducts (db: QueryContextFactory)  =
    selectTask db {
        for p in SalesLT.Product do
        select p
    }

// wrapped way
type ListProductsQuery(storeId: StoreId, db: QueryContextFactory) =
    // inherit a base type
    // this Run already doesn't work with a Dapper connection (bad abstraction?)
    member _.Run() =
        selectTask db {
            for p in SalesLT.Product do
            where (p.StoreId = storeId)
            select p
        }

let getProductsByStore (storeId: StoreId) (db: QueryContextFactory) = ListProductsQuery(storeId, db)

This is how SqlHydra does transactions:

let completeOrder (db: QueryContextFactory) orderId = task {
    use! shared = db.CreateContextAsync()
    shared.BeginTransaction()        

    // Update status for order
    do! updateTask shared {
            for o in dbo.Orders do
            set o.Status "Complete"
            where (o.Id = orderId)
        } : Task

    // Write to audit log
    do! insertTask shared {
            into dbo.AuditLog
            entity { Message = $"Completed order {orderId}"; Timestamp = DateTime.UtcNow }
        } : Task

    shared.CommitTransaction()
}

As for logging named query metrics, I see that SqlHydra's QueryContextFactory can take a custom logging function, but I wouldn't be able to use those typed query names. I could probably get over it haha. I'm looking at 3 different ways of doing things and trying to merge them in my head.

At least with my helper function, I have the ability to put it anywhere; it doesn't have to fit into a specific style. So I could do

withDbActivity logger "Set Order Status Complete" None (fun () -> task {
    do! updateTask shared {
            for o in dbo.Orders do
            set o.Status "Complete"
            where (o.Id = orderId)
        } : Task
})

But this means that the particular block for update status and having that be attached to a query name won't be shared across other uses. It'll work in that I could copy-paste to find the part where it's slow, but I wouldn't necessarily see other usages of the same query.

I'm not really too sure what the best approach is. I do really like Dapper and I don't mind writing SQL queries. However, in larger projects with a lot of surface area and change, keeping things up to date really only happens with integration tests, and that assumes people on other projects find them and that nothing gets missed. This is why microservices are usually split by teams, communicate via message bus, and have their own DBs. Scaling is part of it, yes, but it's primarily to make sure teams don't step on each other.

But that's not the scope of this project. I could probably use Dapper, plus maybe Dapper.FSharp or SqlHydra for the things where sql generation makes sense, and raw Dapper everywhere else. If I have two flows for doing metrics, then so be it.

I may wind up having a few different ways for each method, but try to make it somewhat invisible at the consumer level? Or I could just pick one (Dapper) and just have one thing. KISS and all that.

But that's part of the learning. I'm currently fumbling around for what feels “right” or “natural” in this current environment. Even in a CSharp repo, I'd want to steer away from Repositories because they typically become God classes, primarily due to pairing the DbQuery and its execution.

I am very open to feedback. WriteFreely blog posts are the easiest to find on the fediverse, but the mastodon handle for this blog is @programming@blog.keyboardvagabond.com and the KeyboardVagabond mastodon link is https://mastodon.keyboardvagabond.com/@programming@blog.keyboardvagabond.com/116525648156410356

#fsharp #csharp #softwaredevelopment #dotnet #programming

 
Read more...

from Michael DiLeo

#art #museums #japan #naoshima #teshima #travel #asia #digitalnomad #destinations #travelphotography

click on the photos to see the full size

Day 1

I took a few days to visit the art islands of Naoshima and Teshima. As it was told to me, these islands knew that they were declining some decades ago and sought to revitalize themselves by bringing in artists to create exhibits and art houses. It seems to have worked. There were plenty of people showing up by ferry, which is cheap at only 700-800 Yen.

When I arrived at the Naoshima port, I stored my luggage and walked across the street to one of the ebike rental shops, where I rented a bike for the day for 1,500 Yen. I could have stored my luggage with one of the shops for 500 Yen with the ebike rental instead of putting it in the storage for 800 Yen, but whatever. I then went to the southern part of the island to check out some of the installations. Unfortunately, photography is not allowed inside a lot of the museums, but while I was waiting for my 1PM New Naoshima Art Museum slot, I stopped by a cafe overlooking the strait.

view from the cafe windows overlooking the water with small islands nearby You can sit in the sofa chairs for 500 Yen. On my second trip to the cafe later, I sat next to someone doing a small painting of the scenery.

small coffee set with tiny milk cup on brass painting plate and stirring spoon I thought that the little cup they gave for the milk was really cute and the milk dripping down the side looked cool. It was so tiny!

From there I went around the corner to the museum. You couldn't take pictures of the inside, but at the cafe, they had a nice view. near the cafe, the view through the narrow concrete hallway to the outside view of the water, similar to the cafe

Around the museum are some other works as well: large metal arch with a couple walking away small beachside garden area with stones and arrangements

There's actually a sign saying not to enter the area, but I didn't say anything to the guy. 😂 Above this portion was the rest of the hill that it was in, with a sitting area on the slope so you can look out at the ocean.

click for full size There are two giant pumpkins on the island, one by the port and the other on the southern/eastern end. They were quite popular photo spots!

This is one of the featured pieces at the Bennesse House, with each of the phrases lighting up in different colors. Each phrase is some verb + live/die. click for full size

Another one of the outdoor exhibits. This part is inside, but there was another pond area filled with these. A plaque said that they were originally displayed at an exhibit in France decades ago and that the balls are made from industrial slag refuse. I took this photo because it made me think of raytracing demos in computing 🤓. click for full size

There's also a chunky otter living in the water.

And near the guest house I was staying in was this....ball structure? click for full size There is an area inside that you can go and it may have been bike parking or something?

Day 2

The second day on Naoshima was a lot more tranquil. I mostly walked some paths, visited a few art houses, and hung out in cafes. You really only need to stay one night in Naoshima and leave in the morning at most, but it was nice to have a slow day.

click for full size

click for full size The sign reads: > This <gate> is built in the Buddhism-Medical Gate style with tiled roofing. The inscription “Auspicious day, April 1701” on the ridge-end tile suggested it was built at the same time as the main hall and reception hall. The plaque was handwritten by Takatsuji Chunagon Yonaga. It features the ex-Naoshima Lord Takahara's family crest & ship's seal. It was donated during the Anei era by Takahara Jirobei Toshisada, a samurai of the Kyushu Kuroda clan, a retainer of the ex- Naoshima Lord Takahara. By Naoshimacho Board of Education

After that I found a nearby trail and had to explore it! click for full size And inside are two figures. I couldn't get a good photo due to the reflection, but here's one of them. click for full size

click for full size

You can hear the birds chirping here, despite this place being so close to town.

(If the video doesn't load you can get it here). I'm still trying to find the best way to host video clips and such.

And at the top is a shrine with the guardian doggos. click for full size

Who's a good, handsome guardian deity? You are!

Further on I found the local community center. In the small building to the side I heard some women talking. They seem to have a lot of local events and activities for the locals to attend here.

you may need to scroll for the vertical images The building is pretty cool in its design. It has a large mound of dirt and moss on the side of the structure to help keep it cool, and the top part is an air intake to bring in cool air.

Near one of the other art buildings I saw some of these interesting pieces. Kaneki-kun, is that you? I have no idea what this is.

More Walking Around

A lot of the old houses on the islands have really nice gardens that you can often see or take a peek into. This one stood out to me, with the old-style Japanese door / gate opening into a brightly lit garden. click for full size

Oh look, a random path up a hill. Where does it go? click for full size

An almost forgotten little shrine? Cool! And golden hour is coming out. Nice. click for full size

A random shrine path next to a cement mixer? Sure, why not? click for full size

I eventually made my way a bit further south of where I was staying. I walked about 20-25 minutes to another settlement section near a port. It wasn't the coziest of places to chill, but I enjoyed the tranquility a bit. If the embed doesn't work (I'm working on how to show short videos), see the post on Pixelfed.

click for full size

From here I went and ate some Ramen, then walked back. And found another path to walk up click for full size

Looking back from the top here gives a good view. click for full size

This shot came out quite nice. And the cat wouldn't leave me alone. I saw it chilling on the steps on my way up, but as I was leaving, it wanted attention, food, or both, and wouldn't let me leave. So, I had to give scritches for a while. Then I'd take a few steps and it'd run in front of my legs for attention. Leaving took a bit 😂. click for full size

Day 3 – Teshima

I left early the next morning and some of the other guests were kind enough to chase me down at the bus stop and give me my phone charger. Thank you, kind strangers! I took the passenger boat to Teshima, rented an e-bike here, and began wandering around. I was able to drop my luggage off at the same place where I rented the ebike, right by the port. There are other places nearby, but this one is literally right outside the door.

First stop was a house turned exhibit. Photos were allowed in the outer areas. I thought the cafe out front was a cute little spot. One thing that I've noticed in Japan is that there isn't as much outdoor seating, so it was nice to find it where I could! One of the featured pieces inside was a type of collage arrangement of a famous Shunga artwork, which was early Japanese erotic artwork. They had a book there of some of the famous works, including the original of the piece on the site.

I then went around and found some out of the way spots, including this shrine up the top of a path. Behind the shrine was another gate that opened out to the view. It felt almost like transitioning into another space, so I made sure that I went through the torii gate in both directions so I wouldn't get stuck in the spirit realm or something.

After that I stopped by The Factory, an old factory turned cafe, for some food and coffee, but mostly a place to sit. It had a cool vibe to it. click for full size click for full size

From there I began making my way to a portion of town more in the center of the island, stopping by the interesting sites along the way. click for full size

There was a sign that seemed to say this was important or something? It looked like a house though? click for full size

Arriving into the small town. It seems like it must be local politics season. A lot of towns have posters like the blue one back there. click for full size

A little further up the way is another restaurant with a nice outdoor area. Too bad I already ate! click for full size

Ok, let's continue up the path. I was walking and didn't mind since I had some time before I needed to get to the Teshima Art Museum, but I probably should have taken the bike. These old buildings stood out to me. They look not quite dilapidated, but are being reclaimed by nature. I'm not too surprised given the elderly age distribution of the islands. click for full size

Next up, the Guardians of Gains! These dudes are jacked! Only the fittest are allowed in, thus, I could not enter. click for full size

Islands man. Fortunately, I did not see any wild boar.

And now I'm arriving at my objective: La Forêt des Murmures click for full size This is an art installation by Christian Boltanski, a French artist. In the forest, there are hundreds of bells with plastic pieces hanging down from them bearing the names of lost loved ones. It was an incredibly somber and pensive moment that I tried not to let be ruined by constantly walking through cobwebs. Fortunately, I was able to find a place to sit, though I had to leave earlier than I would have liked for the museum entry. This was my favorite exhibit out of all of them.

There was something somber and comforting in seeing so many names. Sad, because so many people have lost loved ones, but comforting in knowing that you're not alone, and warm in knowing that these people were remembered and their names written down. At the same time, the exhibit features natural wear from the environment. The bells will wear, and the names will wear and wipe away (though they are collected). For now, the memories of loved ones persist, and it made this place feel sacred. click for full size

If the embed doesn't load, the video clip is here.

It was also caterpillar season, so there were tons of these little guys hanging down, beginning to make their cocoons. I wanted to get a video, but my phone can only do manual focus in camera mode, not video mode. Because technology. click for full size

I eventually made my way back into town and rode down the hill toward the Teshima Art Museum. click for full size

Next I stopped at a little flat area above the museum with a food truck while I waited for my entry time. The museum is the white blob you see toward the bottom. Photos are not allowed inside, so here's the website for the Teshima Art Museum. In case the embed doesn't load, here's the link.

The space itself echoes sounds very much. When you go in, you are advised that this is a no speaking zone and no cameras are allowed. It's very tranquil and relaxing in a way. You hear a lot of sounds from the outside, like the birds, and when people are in certain places you can hear the quiet ruffling of their clothes as they move, even if they are being quiet. Unfortunately, though, it was about to rain and I had other places that I would have liked to have gone. But I was told by a travel friend that a good thing to do if you can is to reserve a hotel on the island and stay at the museum at night or in the evening because everyone else will be gone and you won't be forced to leave. You'll have the space all to yourself. I couldn't do that, but I'm glad that I got to go.

From there I went down the hill and saw Les Archives du Cœur, another no-camera piece. You enter a dark hallway with mirrors all along the walls and a single incandescent lightbulb suspended in the middle, brightening and fading with the elevated sounds of the recorded heartbeats. Outside the exhibit area you can see whose heartbeat is playing; you can also have yours recorded as well. The sound inside ranges from flowing fluids to anxious, elevated pounding as it reverberates all around you.

From there I began riding back as the rain started coming in. I decided to continue on to some other interesting exhibits and arrived quite wet. I was looking forward to seeing this exhibit just from how cool it looked. It's three old noodle-making machines interwoven with red threads. The plaque reads: > Memory of Lines, 2025. Three somen noodle-making machines, once used in Kou, Teshima, are installed and interwoven with red threads to create a spatial composition. These machines, passed down and used for over 60 years, were handed over for this work as cherished items, no longer needed yet too precious to discard. Through the voices of the people and the objects left behind, this installation weaves together the memories of daily life and the land that has been inherited on Teshima, carrying them gently into the future.

I'm not sure if the red threads are related to the threads of fate? But I liked how the strings wrapped around portions of the machines, almost like they were being restrained. click for full size

I noticed on this machine that it looked like they polished and oiled the gears, while the rest was still dusty and aged. It's hard to make out because my phone loves over-exaggerating reds. The threads even go up into the attic space. click for full size

Just outside, between the coast and the udon shop that I went to, was this very pretty looking house.

And the final exhibit of the day. I've forgotten my notes on these, but each of the child sculptures has a different geo-coordinate on the back of its shirt.

And finally, it's-a-me. I wanted to get a closer photo, but it was raining again, so this sufficed.

I finished off the night in Uno, a port city on the mainland, after taking a passenger boat directly from Teshima. I found a cool looking izakaya, a bit dirty, super smoky with people smoking inside, and run by a funny, friendly uncle and his wife. I asked him about the baby face and he said that it was his grandson. He seems to be very proud. Just look at those massive cheeks!

 
Read more...

from Michael DiLeo

I am finishing up a month spent in the small town of ~7,000 people called Kotohira, which is also home to the oldest kabuki theater in Japan! I came here for cherry blossom season and to continue my burn-out recovery away from huge cities. I can't really say if Kotohira was too small or not. It possibly was, but I made some good connections with the people staying and volunteering at Kotori Coworking and Hostel. I'd like to share some photos from that time.

Shrines of Kotohira

Walking up about 700 steps, through a narrow street of shops and tourists, you'll enter the shrine complex. It's not all one set of stairs, so it isn't as hard as it sounds. There are plenty of stops and places to see flowers and trees along the way. You also get a good view of the valley and mountains on the way up!

I thought that this section here was quite nice. The trees provided shade from the sun, but the humidity made it even warmer. You tend to see clouds and fog around the tops of the mountains here! a portion of the stairs forming an L before continuing up

And look at this tree! I took it with the half zoom camera, so the proportions are a bit off. big tree with moss on the bottom

Some of the stairs looking down at some stairs with some tourists walking down

Here's the side of the main complex at the top.

Toward the bottom of the mountain, the Cherry Blossoms were just starting to come into bloom. white cherry blossoms pink cherry blossoms

And through the main gate at the front, you can see the whole valley view overlooking the valley with mountains in the background. The view is through the wooden front gate building with a white banner above the mountain view

Ritsurin Garden, Takamatsu

In another city, Takamatsu, about a one-hour train ride from Kotohira, is the famous Ritsurin Garden.

You might get lucky and get a cool train! blue painted train with yellow lightning or something

In addition to the main house and historical portion, there are a lot of small places to sit and enjoy the nature. You wouldn't even know you're in a city. small stone Japanese hut overlooking a pond

As the season was just beginning, tourists were beginning to pour in. It would get busier later. cherry blossom trees

white cherry blossoms across a stream. A couple has a pink umbrella to protect from the sun

a portion of the castle complex across the pond with Japanese styled trees looking like large bonsai

a closer view of the castle across the pond

walkway around the pond

on a small hill overlooking the pond with the building on the other side. A Japanese style bridge spans a narrower portion of the pond

gardeners trimming plants with large Japanese fan looking things in the ground for decoration

a pond full of reeds

I didn't capture half of the beauty of this park. I wish I had done better at taking photos. It's absolutely worth the trip!

In Takamatsu there are a few under-the-street crossings. This one had some cool decorations in it! decorative reflective wall with engraved constellation animals

artistic 3D relief of stylistic plants and mountains

That's all for today! I have many more photos to show of Cherry Blossoms for the next post!

#japan #travel #travelphotography #shikoku #kotohira #takamatsu #japanesegarden #garden #plants #flowers #cherryblossoms #sakura #nomad #digitalnomad #blog #photoblog #asia

 
Read more...

from Michael DiLeo

I recently finished a week-long trip to Singapore to catch up with an old friend. When I told people that I was going there for a week, several gave me a grimaced look and told me that I shouldn't stay there for longer than three days, and that I would be bored. I can safely tell you that I could have easily spent more than the week that I did, and I could happily do several weeks as a traveling nomad (especially if I was working).

Singapore does some of my favorite aspects of cities really well: greenery, public space, music/drinks/food, and discoverable nooks and crannies, as well as good public transport.

One of the first things that you'll notice when you begin walking around Singapore is that the city is very green. In fact, it's a Green City, and the only one in Asia, according to a sign in the Botanical Gardens. Back around 1971, they decided to become a Green City and began planting 10,000 saplings per year until 1990. As a result, the city is incredibly green, with a lot of natural shade in addition to the awnings and balconies that are required in a lot of spaces to provide protection from the sun and rain. Let me quickly share some photos of the greenery around the city and show off some of the cool architecture of the buildings.

Walking the City

sky scraper with interesting layered, brown "layers" with a balcony level with trees This building is very fun to look at with the curvy layers of brown, along with the balcony level with trees and bushes. There are a lot of well-architected buildings here to keep things interesting. It's not like cities that are just boring glass towers. It gives a sense that the city cares about how it looks and how the designs make life feel here. It's very interesting and fun to experience for yourself and I really appreciated it.

Parliament government house in the colonial style building with the former supreme court domed building in the background This is the Parliament building and the domed building in the background is the former Supreme Court, which is now a National Gallery. The National Gallery was a good stop to visit; I will say that museums in Singapore are a bit pricey for what you get, but I enjoyed them and don't regret it.

Near the government buildings is this bench at a children's park, with messages about kindness and generosity. You see messages like this all over the city. bench with the words Kindness Corner and messages of kindness and generosity In case you can't see it, some of the messages say “Listen without judgement,” “Gentle words heal,” “Give encouragement,” etc, but also in multiple languages. Singapore has English as an official language, but is very much multi-lingual and teaches multiple languages in the schools, with English now as the primary.

bridge over river with trees and person lying down on a bench This was right outside the Asian Civilizations Museum, which featured goods from a recovered shipwreck in a very cool, wavy presentation, like below! This day was a calm, not too warm day and people were enjoying the weather and shade before it got warmer. I really like the white pedestrian bridge going over the river. shipwreck pottery pieces with model ship arranged like waves in the ocean I love this arrangement of pottery to look like the ocean. The model ship is a model of the ship that they found and includes the ropes used to hold the wooden planks together, as they didn't use nails. It was so cool!

Ok, back to the green. Oasia Hotel tower that is totally covered in plants all up the walls This is such a cool building, literally covered in plants along the walls from top to bottom!

Botanical Gardens

I'm a huge fan of Botanical Gardens, so here are some photos from there.

photo of iguana or something This was the first thing that I saw in the Botanical Garden. It was trying to hunt for some food.

One thing that I noticed around the city is that there are a lot of sculptures in various places. I really liked this sculpture of swans(?) as though they were taking off from the pond. I recall from one of the sign posts that, at some point, some of the plants were dying for an unknown reason, so they drained the pond and discovered turtles eating the roots. The turtles were relocated and the plants started thriving again. The draining and refilling of the water also allowed some seeds at the bottom to germinate. pond with swan sculpture in the middle

I love gazebos! I didn't notice it at first, but there was a couple getting married or taking wedding photos here. gazebo with barely visible couple inside

This is an area with a restaurant and coffee stand. I don't want to spoil too much of the gardens, but this is also what a typical walk looks like, with plenty of shade. canopy covered paved path with shops in the background

Now, probably for one of the things you were expecting to be first: the Sky Garden!

Sky Garden and Flower Dome

These amazingly lovely tree-looking towers are structures built specially to let plants grow up them. The tickets to go up are a bit pricey, but you know you're going to do it.

sky garden tower trees with sky bridge, an aerial walkway, going around them

These statues of the rabbit and dog holding cameras to take pictures are so cute! bipedal rabbit in dress and dog in suit with cameras held chest high, taking photos

Inside the flower dome, which is like a giant greenhouse dome with a mountain-looking waterfall covered in plants, there was a Jurassic Park exhibition that I thought would be lame but was super cool and had me going “dinosaurs!!!!!” photo of me with waterfall dome behind me and brachiosaurus dinosaurs One cool thing about the dome is that the mist from the waterfall keeps the environment quite cool, and it's a good bit warmer at the top. I really liked the dinosaur exhibit. They had good animatronics, but also some interesting information about the dinosaurs, including models of some of the smaller ones. It was a super fun stop.

I absolutely loved these flower petal art works in this fountain. a fountain in the sky forest area with metal flowers with red stamens

Ok, now I'm getting really distracted, but there are a lot of good parks and walking trails to explore with great views. Here's one on the west side of the city center. You can find the trail near Henderson Waves on Google Maps.

Henderson Waves and trail

I'm not sure if this video will load, I need to find a good hosting option. Listen to all of the insects and birds. Walking the trail with sounds of birds and insects It was actually a bit warmer and a lot more humid here. There is a sky walkway, but it was damaged from some heavy rains, so ground trail it is!

walking path with trees on both sides

I love benches. Photos of them make places feel more cozy and welcoming. Also, Evil Building ™?

benches with iron fence overlooking the distance with curved building spires in the distance that look like an evil building

More of Henderson Waves. I didn't get the best photos, but it does actually look like waves, and these semi-circle spots have areas to sit and relax in the shade if the sun is already behind you.

Henderson Waves walkway with first semi-circular dome. the railing does a slight zig-zag for an interesting look and the walkway is in the canopies of the trees

Moar Evil Building! a closer photo of the evil building. There are at least 7 differently sized towers curving toward each other like a claw

Later that night, Atlas Bar/Hotel

Later that night I went with a local friend to go out for some drinks and take a peek at the Atlas Bar and Hotel. It's one of the top-rated spots in town and the building is designed in 1920s modernism / Art Deco style. It's so cool! I'd love to go back and actually get a drink there.

But first, some more cool statues. statue art of two girls in pink dresses with umbrellas and male figures in the background

And now the front of the Atlas Bar with the soaring crane(?) outside the first floor of the tower. An art deco lattice work covers the front of the building with the statue just left of center in the image, also in art deco style

This whole building is so cool. I wish I had better photos, but there are statues at the top of the building, on the side, holding the world at waist level. the whole Atlas Tower building

The inside is SO COOL! The entire thing from floor to ceiling is a work of Art Deco. I could stay there for hours just looking at everything. inside the bar area of Atlas Hotel

Walking around and seeing murals

After that we hit some of the bar streets and music scenes in Korea Town and nearby areas. Here are some of the murals and cool buildings that I got.

This one is like a glimpse of how things looked when Singapore was still much smaller. I like how they incorporated the actual shutter windows into the painted house's windows. shops with second floor walls painted to look like a view of smaller buildings from Singapore's earlier days

There is so much going on here, I love it. a bar building with blue and red lights on the outside with paintings of birds and mannequins sitting in chairs waving, along with other small figures hanging from the top floor balcony railings

A close-up of the message. painted message on the side of the same building saying Art should comfort the disturbed and disturb the comfortable

These were all at Haji Lane, where there's a ton of bars for nightlife. We even saw some younger folks gathered around an outside DJ and dancing. It was a very fun and happy vibe.

Here's another mural I found in Chinatown. mural depicting family life with children on a table, a woman sewing, and others on the far left doing daily work

And then finally the next day we went to the beach at Sentosa, which is like an island with beaches, theme parks, etc. art sculpture of a giant monkey-like animal made of wood paneling lying on its side and looking at the corner of a shipping container buried in the sand There are a lot of cool pieces of artwork at the different beaches here. It was worth the trip, not just for the beach, but there are a lot of other activities on the island.

Unfortunately after this, it was time for me to move on, so of course I stopped by the Jewel at the airport on the way out. I wish I had had an extra hour or so to enjoy a coffee and the vibes there, but I only had a few minutes.

selfie of me overlooking the jewel interior with the outside of the dome lined with multiple levels of walkways and trees, with the center waterfall falling into a funnel pit

I just went chronologically through my photos, but there are some things that I may have missed.

Eating out and other notes

Singapore can be quite pricey, but you can also eat cheaply if you want, for $6-$8 or less, at Hawker Centers. These are covered, open-air centers with lots of food stalls. I didn't get any photos, but that's where I recommend going for a variety of good food, cheaply.

Also, the public transit is great and spans a lot of the city. There are bike lanes, but it's still pretty car-centric in design, though owning a car is super expensive so that helps cut down on congestion. But, if you do take a subway, don't bring a durian with you! subway sign saying no eating or drinking, no smoking, no flammable goods, and no durians

Singapore has a lot of smaller neighborhoods with plenty of opportunities to explore nooks and crannies, local areas, parks, and green space. Sure, you can hit the main spots in a few days, but I'm of the opinion that if you are a traveler and have time, leaving after a few days would be a lot of missed opportunities. There's more than meets the eye; you just have to peek behind the curtain.

#singapore #travel #digitalnomad #travelphotography #urbanism #cities #cityscape #asia #bars #beach #slowdown

 
Read more...

from Programming

I recently caused myself a minor issue by installing some updates on the Keyboard Vagabond cluster. It wasn't a big deal, just some version number updates from a project called Renovate that automatically creates pull requests when package versions you use get updated. Doing this triggered a restart on the redis cluster, which means that different services may need to be restarted because their redis connections go stale. I had restarted the piefed-worker pod, but the update didn't seem to stick and I didn't realize it.

I noticed the next morning that I wasn't seeing any new posts, so I figured the worker was stuck and, sure enough, I checked the redis queue and saw it stuck at ~53k items.

image

Piefed will stop publishing items to the queue when the redis queue reaches 200MB in size and return 429 rate limit http responses.

The solution was a restart, after which processing resumed, but it got me wondering about pod scaling.

The thing about scaling the worker is that piefed scales internally from 1-5 workers, so vertical scaling is preferred over horizontal. This is especially true because redis doesn't ensure processing order across consumers like Kafka does: by adding a new pod, I could create a situation where one pod pulls a post create, the next pulls an upvote, and the upvote gets processed before the post is created. So normally you wouldn't want to scale horizontally, but there is a use case for doing it: something gets stuck.
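The ordering hazard is easy to see with a toy sketch. This is not piefed's actual code, just two simulated workers draining the same queue at different speeds:

```python
import threading
import time

# Two dependent events sat in the same queue: the post must exist before
# the upvote can be applied. With one worker this ordering is guaranteed;
# with two, whichever worker finishes first wins.
processed = []
lock = threading.Lock()

def worker(task, delay):
    time.sleep(delay)  # simulate uneven processing speed between pods
    with lock:
        processed.append(task)

# Worker A pulled the post-create but is slow; worker B pulled the upvote.
a = threading.Thread(target=worker, args=("create_post:42", 0.2))
b = threading.Thread(target=worker, args=("upvote_post:42", 0.0))
a.start(); b.start()
a.join(); b.join()

print(processed)  # the upvote lands before the post it refers to exists
```

With a single worker the pop-then-process loop is serial, so dependent events can't leapfrog each other; a second consumer removes that guarantee.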

In the past, the queue had blown up due to one or more Lemmy servers going down and message processing stalling. I solved that at the time with multiple parallel worker pods so that at least some of the workers would likely not get stuck. Doing something similar could help in this current case, where the first worker wasn't processing queues. The ultimate item on the to-do list is to make that pod report redis connectivity as part of its health check, so that it gets restarted if redis fails. (I'll be doing that after this blog post.)
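For reference, a rough sketch of what that health check could look like: a livenessProbe that fails when redis is unreachable, so Kubernetes restarts the pod. This is an assumption-laden sketch, not a tested manifest; it assumes CELERY_BROKER_URL is set on the container and the redis python package is available in the image.

```yaml
# Sketch only: restart the worker pod when redis stops responding.
# Assumes CELERY_BROKER_URL is set and the redis package is installed.
livenessProbe:
  exec:
    command:
    - python
    - -c
    - "import os,redis,urllib.parse; u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); redis.Redis(host=u.hostname, port=u.port, password=u.password, db=int(u.path[1:]) if u.path else 0, socket_connect_timeout=3).ping()"
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3  # ~90 seconds of redis failures before a restart
```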

Up until today, my horizontal scaling was based on CPU and memory usage, but I never hit those limits, so it never triggered. I was working with Claude on it when it introduced me to KEDA, Kubernetes Event Driven Autoscaling (https://keda.sh/). This looks like what I need.

Installation was pretty simple (https://keda.sh/docs/2.18/deploy/): you can use a Helm chart, or run kubectl apply --server-side -f https://github.com/kedacore/keda/releases/download/v2.18.3/keda-2.18.3.yaml and it takes care of it. I had Claude create a kustomization file:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: keda-system

resources:
  - https://github.com/kedacore/keda/releases/download/v2.18.3/keda-2.18.3.yaml

patches:
  # Custom patches to change the namespace to keda-system to be consistent with my other namespace patterns
  - path: patches/clusterrolebinding-keda-operator-namespace.yaml
  - path: patches/clusterrolebinding-keda-system-auth-delegator-namespace.yaml
  - path: patches/rolebinding-keda-auth-reader-namespace.yaml
  - path: patches/apiservice-external-metrics-namespace.yaml
  - path: patches/validatingwebhook-namespace.yaml

The patches aren't necessary, but they look like the below, just because I want that namespace.

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io
spec:
  service:
    namespace: keda-system

After that, there's a scaledobject in Kubernetes that you can configure:

---
# KEDA ScaledObject for PieFed Worker - Queue-Based Autoscaling

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: piefed-worker-scaledobject
  namespace: piefed-application
  labels:
    app.kubernetes.io/name: piefed
    app.kubernetes.io/component: worker
spec:
  scaleTargetRef:
    name: piefed-worker
  minReplicaCount: 1
  maxReplicaCount: 2 
  cooldownPeriod: 600  # 10 minutes before scaling down (conservative)
  pollingInterval: 30  # Check queue every 30 seconds
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 600  # Wait 10 min before scaling down
          policies:
          - type: Percent
            value: 50
            periodSeconds: 60
          selectPolicy: Max
        scaleUp:
          stabilizationWindowSeconds: 120  # Wait 2 min before scaling up
          policies:
          - type: Pods
            value: 1
            periodSeconds: 60
          selectPolicy: Max
  triggers:
  - type: redis
    metadata:
      address: redis-ha-haproxy.redis-system.svc.cluster.local:6379
      listName: celery  # Main Celery queue
      listLength: '40000'  # Scale up when queue exceeds 40k tasks per pod. Piefed stops pushing to redis at 200MB, 53k messages the last time it got blocked.
      databaseIndex: "0"  # Redis database number (0 for PieFed Celery broker)
    authenticationRef:
      name: keda-redis-trigger-auth-piefed

This will scale up when 40k messages are in the queue, which should only happen when something isn't getting processed, and will scale up to a second pod at most. So, in the event that a pod gets stuck, at least things should gradually keep moving.
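As a sanity check on the numbers, the scaling decision roughly works out like this. This is a simplified model of KEDA's redis list trigger; the real HPA math also factors in current replicas and the stabilization policies above:

```python
import math

def desired_replicas(queue_length, threshold=40_000,
                     min_replicas=1, max_replicas=2):
    """Approximate KEDA redis list trigger: aim for one pod per
    `threshold` queued tasks, clamped to the configured bounds."""
    if queue_length <= 0:
        return min_replicas
    return max(min_replicas,
               min(max_replicas, math.ceil(queue_length / threshold)))

print(desired_replicas(10_000))   # normal operation: stays at 1 pod
print(desired_replicas(53_000))   # backed-up queue: scales to 2 pods
print(desired_replicas(500_000))  # still 2: capped by maxReplicaCount
```

So with the queue stuck at ~53k, a second worker pod comes up, and maxReplicaCount keeps things from running away no matter how deep the backlog gets.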

When I got to this point, I decided to implement my restart idea, but Claude suggested using the Celery worker's retries instead, so it added:

- name: CELERY_BROKER_CONNECTION_MAX_RETRIES
  value: "10"  # Exit worker after 10 failed reconnects → pod restart
- name: CELERY_BROKER_TRANSPORT_OPTIONS
  value: '{"socket_timeout": 10, "socket_connect_timeout": 5, "health_check_interval": 30}'

And a new startup probe, sure, why not:

startupProbe:
  exec:
    command:
    - python
    - -c
    - "import os,redis,urllib.parse; u=urllib.parse.urlparse(os.environ['CELERY_BROKER_URL']); r=redis.Redis(host=u.hostname, port=u.port, password=u.password, db=int(u.path[1:]) if u.path else 0); r.ping()"
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 30

and it changed a few thresholds on the liveness checks, which I thought looked fine.

The current state of things is that once the number of records started going down, other servers started federating, which is the spike you see in the graph. There are now 3 web pods and 2 worker pods, vs the typical 2 web pods and 1 worker pod.

The good news is that after scaling out, the total max processed gradually rose from ~1.5k per minute to just under 3k per minute. Once the records fall below 40k and other servers are back to normal federation, things will go back to more normal levels, as a single worker is fine unless things stop and get backed up.

Good job on piefed for returning 429s to keep things from getting too crazy!

Here are the requests coming in. You can see big spikes once we stopped returning 429's. I do have some nginx rate limiting set up as well to keep things sane. image

Edit: I just ran into a fun thing while doing all of this. I ran out of WAL (Write Ahead Log) space on the storage volume. I gave it 10GB with expansion, so the primary db node started failing at 20.6GB in size. I just doubled the size of the WAL PVC and that resolved it. lol.

Edit 2: Fun waves as it hovers around the 40k threshold

#selfhosting #kubernetes #fediverse #yaml #keda #autoscaling #piefed #lemmy #programming #softwaredevelopment #k8s

 
Read more...

from Michael DiLeo

It is not uncommon for those who travel as the main avenue for their lifestyle to eventually become burned out. The temptation to see and do everything or the FOMO makes it difficult to stay in one place and rest. There is so much to explore! New people, new places, new food! New, new, new, new!

But the one common thing that every long term traveler says to newcomers is “SLOW DOWN!” I am currently hitting that phase myself, but I will add my own opinion that it may not be entirely necessary to travel slowly right away. When beginning travel there is a lot of energy because everything is fresh and this way of living is new. You can travel this way for a while, but it does become tiring.

You become tired of connecting with people and saying goodbye. You become tired from all of the moving around. You become tired because you wake up in the middle of the night and don't remember what country you're in, much less your hostel or colive, or which side of the bed the wall is on. It can feel like you hit a wall and it forces some people to slow down and others choose to stop traveling for a while or all together. I asked other nomads about their experiences with hitting the wall and got some feedback that I didn't necessarily expect.

A few people said that they never hit the wall because they traveled slowly from the start. They prioritized community from the beginning and spend multiple months in a given place. One person said that they spread out the planning stage of travel because having to do it all at once every month or so feels exhausting. Another said that their “currency” is the decision making and that having to plan flights, find neighborhoods, lodging, transportation, etc is is a lot of mental work that can cause them to hit a wall. Others found respite in returning to familiar places rather than chasing “new,” which allows them to have a community in that place. Most of the responses included this common thing – community.

Community is arguably one of our strongest needs and was a recurring theme for respondents and for myself, as well. The running theme among nomads is that they eventually want to have familiar and consistent connections with people. They want to have a community and coliving style life for more time. One month can sound like a lot, but for nomads it really isn't. Time flies and our needs for connection go beyond four short weeks.

So, what is one to do when they hit a wall, or to avoid hitting it? I hit mine at 2 years because I had to return to the US and did a lot of fast travel for a year. The US is not ideal for travelers and nomads and the hostels are relatively few, hotels are expensive, and coliving sites are non-existent. I came to a point where I was in the Netherlands and hardly wanted to leave my bed, but I had more places to go and friends to visit before my coliving in Tarifa, Spain would start. The answer to the question is what every experienced traveler says: slow down. Being able to recognize your needs and listening to your mind and body are important. For this year, 2026, I am going to focus more on slower travel with known people. My original plan was to go to Asia, which I may still do, but even after a month of being in a colive in Tarifa, I'm not ready to hit the road. The emotional load is still a bit strong and my energy hasn't recovered yet. I want to be a lazy bum, which is OK, though I don't want to go too far and fall into a pit with it.

My plans this year are to do more with the Wifi Tribe, to slow down and spend more time with people. I'm hoping that I may really hit it off with some new friends and see them longer than a month at a time, but we'll see. I may still go to Asia, but that's a problem for future Michael when he goes to Namibia in January on a Wifi Tribe chapter. But I suppose my real goal is to foster good relationships and connections. It can be hard while traveling, but one benefit of coliving is that you see people more frequently than when back home. I can see people throughout the day rather than waiting until the weekend. I think this year will be a good one. 🤞

#travel #digitalnomad #nomad #nomading #colive #loneliness #resting #blog

 
Read more...

from Programming

It started with a perfectly good and running kubernetes cluster hosting fediverse applications at keyboardvagabond, with all the infrastructure and observability that comes with it. I've worked in kubernetes environments for a while, but hadn't been able to see how everything comes together and what it means; I also wanted to host some fediverse software for the digital nomad community.

I followed a guide on bare metal kubernetes setup with hetzner (though you should definitely NOT change cluster.local like it says) with some changes, adjustments, and modifications over time to suit my scenario. While I was getting up and running with my 3-VPS cluster, I became nervous about resource usage. The applications that I host are currently more RAM-hungry than CPU-hungry, and the nodes with all of the applications were using ~12GB out of the 16GB available. I decided to make 2 of the 3 nodes worker nodes and have one control plane node. The control plane is the one that determines what the other nodes are doing and hosting. Put a pin in this, it'll come back later.

I was also able to migrate from DNS entries on exposed ports to Cloudflare tunnels and Tailscale for VPN access. This means that no one can try to input commands on the Talos or Kubernetes ports, as they're no longer exposed. You'd need to figure out the encryption key to be able to do it, but now it's even safer. Put a pin in that.

This has been very much a learning process for me in a lot of ways, and I hope that I haven't forgotten too much – it's funny how memory is. I've been taking a lot of notes and having claude/cursor draw up summaries that I leave lying around. It's funny how much sense your documentation makes until you come back 3 months later.

One of the issues in the back of my mind was that I had configured Talos to launch Kubernetes with the external IP and an explicit port. This was a mistake, because it meant that the nodes were primarily communicating with each other externally rather than over the VLAN, the internal network. Internal traffic still happened, as I believe service-to-service communication would go via kubernetes to a local IP. However, I eventually got a broken dashboard working that showed me the network traffic by device, and it was all on eth0, the external ethernet, not the VLAN. I then checked the dashboards on the provider and it showed 1.8TB of internet usage. That's within my budget, thankfully, but way too much for a single-user cluster, as I had not yet announced the services as open to the public.

I wanted to get this working before going live, so I figured that I would start with n3, one of the workers. I have an encrypted copy of the Talos machine config, but couldn't decrypt it, so I copied n2's, changed the IP to the internal 10.132.0.30, and applied... I forgot to change the hostname from n2 to n3.
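For context, the relevant parts of a Talos machine config look something like this. This is a sketch, not my actual config; the VLAN interface name and subnet mask are assumptions:

```yaml
# Sketch of the Talos machine config fields involved.
machine:
  network:
    hostname: n3            # the field I forgot to update after copying n2's config
    interfaces:
      - interface: eth1     # assumed name of the VLAN interface
        addresses:
          - 10.132.0.30/24  # the node's internal VLAN address (mask assumed)
```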

No biggie, I'll change it and apply... timeouts. Tailscale is no longer connected to the cluster. I spent an hour trying to get access, working with Claude for ideas and work-arounds. No dice. I believe what happened was that, in the confusion of 2 nodes with the same name, Tailscale was likely running on n3 and no longer accessible, and the weird state of things kept it from being spun up on the other nodes. If it wasn't the weird state, it was because, at my scale, two nodes don't have the RAM available to handle all of the redundant services from a failed node. Either way, I had to get back into the cluster.

I went into the VPS dashboard, rebooted the server into recovery mode, wiped the drive, re-installed, and tried to re-join the cluster. This should have been fine, since I keep two copies of every storage volume across the nodes in addition to nightly S3 backups. In hindsight, I might have been better off rebooting Talos into maintenance mode instead. But the node didn't rejoin the cluster. It turns out I was missing a particular network configuration that would allow a foreign node to join. That doesn't happen automatically: there's allow-listing for the IP address and some other network policies that need to exist, and I was missing one for one of the Talos ports.
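For anyone hitting the same wall: with Talos's ingress firewall, the allow-listing looks roughly like the sketch below – a NetworkRuleConfig document opening the Talos API ports to the internal network. The ports are Talos's documented apid/trustd defaults; the subnet and rule name are from my setup and illustrative only.

```yaml
# Hypothetical Talos ingress-firewall rule allowing node joins.
# apid (50000) and trustd (50001) must be reachable from the
# joining node's address or it cannot enroll.
apiVersion: v1alpha1
kind: NetworkRuleConfig
name: allow-talos-api-internal
portSelector:
  ports:
    - 50000            # apid – talosctl and node join traffic
    - 50001            # trustd – certificate issuance during joins
  protocol: tcp
ingress:
  - subnet: 10.132.0.0/24   # the internal VLAN (example range)
```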

I needed to get to the control plane node, n1. I rebooted it into Talos maintenance mode and applied the new configuration, but it logged that it couldn't join a cluster and that I needed to bootstrap it. I guess that makes sense – it was the only control plane. I got it up and running, progressively added n3 and n2, and they re-joined. I reinstalled the basic infrastructure and then let FluxCD restart all of the services. The majority booted up, but I noticed that a couple of services were blank. No existing data.

I checked the Longhorn UI, which is what I use to manage storage, and I didn't see many volumes – but I saw about 50 orphans... Crap. All volumes were orphaned. When I put n1 into maintenance mode and then bootstrapped, I thought Longhorn would see the volumes and re-attach them to the services they belonged to. However, when I redid n1, etcd – the store that tracks cluster resources – was cleared, and all of that storage information lost track of what belonged to whom. Learning is painful sometimes.

I tried to take a look at the volumes, but Talos is pretty minimal, so Claude made a pod with Alpine and XFS tools (my filesystem) that would attach a specific orphan volume, mount it, and inspect the contents to figure out what it belonged to. Some things were fairly easy to identify, such as the WriteFreely blog, one of the first services I loaded, which uses its own SQLite database. I got that up and running. I also use a Harbor registry as a pull-through mirror and to privately push my own builds – its volume was all zeros, or at least the first 100MB were. That's not a huge deal. The database volumes were intact, but I couldn't really get those running directly, so I'd have to re-create it.
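The inspection pod boils down to something like the sketch below – a privileged Alpine container with XFS tooling and the node's Longhorn data directory mounted, which you can then `kubectl exec` into and poke around from. The node name, image tag, and host path are assumptions from my setup (the path is Longhorn's default), not a recipe.

```yaml
# Illustrative debug pod for inspecting orphaned Longhorn replicas.
apiVersion: v1
kind: Pod
metadata:
  name: orphan-inspector
  namespace: longhorn-system
spec:
  nodeName: n3                    # pin to the node holding the replica data
  containers:
  - name: inspect
    image: alpine:3.20
    # install XFS + loop-device tooling, then idle so we can exec in
    command: ["/bin/sh", "-c", "apk add --no-cache xfsprogs util-linux && sleep infinity"]
    securityContext:
      privileged: true            # required to loop-mount replica images
    volumeMounts:
    - name: longhorn-disk
      mountPath: /longhorn
  volumes:
  - name: longhorn-disk
    hostPath:
      path: /var/lib/longhorn     # Longhorn's default data path on the host
```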

I gradually got these services running and re-configured. Once Harbor was up, images should have started getting pulled and cached. But redis failed to pull. That's weird.

But first, let me get the database running with CloudNative Postgres. I got it up, but the database was empty, so it was back to looking at orphans. The tricky thing here is that a few applications, such as Harbor, run their own Postgres databases. So instead of just looking at the file structure, I also had to find out which tables were there – and even when I found them, I didn't know whether an orphan belonged to the primary or to a replica. In the end, I decided to restore the latest nightly backup and then had Claude arrange a “swap” that replaces the current volume claim with a pinned volume name. Essentially, the database pod has a PVC (PersistentVolumeClaim), and I wanted that claim to point at the recovered volume. So I had Claude execute those steps, which unfortunately can leave you with a PVC in your source code that carries a hard-coded volume reference – you can get rid of it, but it may or may not be immediately worth it. I restarted, and Postgres showed all of the databases I expected.
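The “swap” amounts to a PVC that names a specific PersistentVolume via `volumeName` – a minimal sketch, where the claim name, namespace, PV name, and size are placeholders standing in for my real values:

```yaml
# Hypothetical example: re-bind a database PVC to a recovered volume.
# The target PV must first have its claimRef cleared (or pointed at this
# claim), e.g.: kubectl patch pv pvc-1234abcd -p '{"spec":{"claimRef":null}}'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data             # the claim the database pod mounts
  namespace: databases
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  volumeName: pvc-1234abcd        # pin to the recovered PV by name
  resources:
    requests:
      storage: 20Gi               # must match the existing volume's size
```

With `volumeName` set, Kubernetes skips dynamic provisioning and binds this claim directly to that PV – which is also why the pinned reference lingers in source control afterwards.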

Next up: fixing redis. It turns out that not only was Harbor using Bitnami helm charts (pre-made configurations for Kubernetes), but so was the redis cluster. I run a main and two replicas across the 3 nodes. It was failing because Bitnami no longer wants to provide free charts, so they moved everything to bitnamilegacy. No biggie, I'll just change the image registry and repository and it'll load. Redis loaded, but then another component called “redis-exporter”, used for metrics, seemed to ignore the image override. I spent the next few hours trying to get it to work and experimenting with other helm charts that provide a cluster arrangement. I settled on one and got redis working. I did lose some data, as applications like PieFed had started running and were publishing messages for work received during the 3 days of being off-line; I decided not to try to recover that. Oh well, it's only social media. Once I go live there will be more current things to look at. It was a pain, though.
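For reference, the image override is just a Helm values change pointing pulls at the bitnamilegacy org – a sketch following the usual Bitnami chart value layout, so the exact keys and the tag shown here are assumptions to verify against your chart version:

```yaml
# values.yaml override – pull images from bitnamilegacy instead of bitnami.
image:
  registry: docker.io
  repository: bitnamilegacy/redis
  tag: 7.4.1-debian-12-r0                     # illustrative tag
metrics:
  enabled: true
  image:
    registry: docker.io
    repository: bitnamilegacy/redis-exporter  # the exporter has its own
                                              # image block and needs its
                                              # own override
```

The exporter having a separate `metrics.image` block is exactly the kind of thing that makes a single top-level override appear to be “ignored”.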

After this, I spent quite a few hours fixing small issues getting FluxCD to reconcile the state of things, especially since I had made changes to PVCs, which are immutable. It took quite a few more hours to either recreate resources or undo changes so that FluxCD was happy. Eventually everything came online, despite me hitting Docker rate limits. I rebuilt the rest of the various fediverse apps, as I have custom builds of Bookwyrm (books), PieFed (reddit), and Pixelfed (instagram) for my Kubernetes cluster.

I then began to rebuild the dashboards that I had lost. I still don't have all of them, but at least now the networking tab shows a LOT of devices, including the VLAN. Mission accomplished? I did do one extra and got a log view of long-running queries from the different apps that I can annoy the developers with – though they look like easy fixes with some indexes and light code changes, hopefully.

I still need to rebuild the redis dashboards, as I had metrics for the different event queues the apps use, which I could watch in case something bad happened. On occasion, if another server fails to respond, it can cause a queue backup, as I don't believe the various apps are “grouping” by domain name, which is a feature of the redis XGROUP command.

Here's a funny thing, though. After running the services for a couple of days, RAM usage is the same with 3 control plane nodes as it was with just one – so my worries were for nothing and cost me the cluster.

As part of the recovery, I took the opportunity to create a VIP for Talos. This is a shared static IP address that the control plane nodes hold an election over, with the current leader answering on it. So I changed the Talos endpoint from a domain name, such as api.mycluster.com, to that IP, 10.132.0.5. I also took the time to migrate from Tailscale's subnet-route setup to their operator helm chart. This should let me expose individual services over the VPN with a domain name, using their MagicDNS system and an annotation on the service. I haven't done that yet, though.
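The VIP itself is a small addition to each control plane node's Talos machine config – a sketch assuming the VLAN sits on eth1; the interface name and addresses are from my setup and purely illustrative:

```yaml
# Fragment of a Talos controlplane machine config.
# Every control plane node declares the same VIP; Talos elects one
# holder at a time, so the API endpoint survives node failovers.
machine:
  network:
    interfaces:
      - interface: eth1          # the VLAN-facing interface
        dhcp: false
        addresses:
          - 10.132.0.20/24       # this node's own internal address (example)
        vip:
          ip: 10.132.0.5         # shared virtual IP for the Talos/K8s API
```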

This disaster was avoidable and could have been a few-minute upgrade had I done everything right, but I was able to take the opportunity to fix other networking and service issues that I was too afraid to touch on a running environment. Now all of my services communicate over the VLAN, I have a VIP for Talos, Tailscale is upgraded, I've migrated further off of Bitnami, and I can properly handle a node failure – except for full service restarts; I would still have to scale some things down manually for that fail-over. But nobody is making or losing money off of this, except me and my VPS provider, so good enough.

In the end, I got up and running, and the AI was genuinely helpful for debugging issues and quickly generating commands and templates for volume recovery. It was nice being able to let it either do the work or run a script to examine the orphan volumes for me. I did have to play around with getting it to write hand-off notes for new contexts, since contexts would fill quickly once I ran out of Claude usage on my plan. I'm glad I didn't have to type a bunch of stuff myself. Of course, AI output is still in “that looks about right” territory – something I stay aware of – but it wound up being a useful tool for this recovery.

The other thing that helped a good bit was that I was actually in another town visiting an old travel friend. Normally I'm the type of person to obsess over a problem until it's solved, but I was there to see a friend, and nobody's livelihood depends on this. So I pulled myself away to go hang out, and even after just 15 minutes away from the keyboard I'd start getting new ideas or realizing something new. That's one reason the recovery took several days: I was still living (and obsessing). The mandatory breaks were probably the most helpful thing I could have done – I just don't know how to replicate them.

#talos #kubernetes #selfhosting #fediverse #keyboardvagabond #whybitnamiwhy #cluster #vps #failover #disasterRecovery

 
Read more...

from Michael DiLeo

Earlier in 2025, I joined a group of friends in New Orleans at one of their houses for a writing event for a local library. It was a poetry or writing competition, I believe. Of course, I didn't think I'd get an entry, but it would be fun to write, which I haven't done much of besides the occasional blog post. I had an idea in mind and wanted to keep things abstract enough that people could read into it what they wanted, but not so much that it was lame. I don't know how I did, but I had fun.

Little Dots

There once were two little dots.
Off to one side, one bounces. It likes to bounce up and down and to dart from side to side.

Another dot changes shape and moves. It swings and flows. Normally, they bounce and flow alone, but when they're together, they love to play. They can do things together that they can't do alone.

While one bounces, the other will bend and stretch, and together they fly high into the sky. They will flow and stretch and play. The bouncing dot teaches the flowing dot to dart and the flowing dot teaches the bouncing dot to bend. Together they have so much fun.

One day a line appears. It can spin and swing. It's fast and strong. The dots try to play, but the bouncing and bending are too much for the line. It hits the dots. It wants them apart. To only bounce and dart, to only bend and flow. Not together and never both.

The dots try to play alone and they miss what they can't do when they're alone. They can't go high and far like they used to.

Sometimes they try to do what the other taught them, but the line hits them when it sees. Sometimes one will distract the line and the other will play like it learned from its friend.

It's hard to get away from the line. It's very fast and it won't leave. It's also stronger than either dot. They can't allow the line any further. But they can't stop it alone.

They wait for the line to be away and come up with a plan. The line is fast, but it cannot flow. It is also strong, but it cannot dart. One dot will bend around it while the other pushes it away. So they wait, and when the line doesn't expect it, they wrap and push, pull and move. But the line is fast. It does not want to leave. It was having fun.

After many tries, and the line escaping, they catch it. The line grows tired – it cannot escape.

“We want to play like before!” they say. “No!” says the line. “I cannot bend and dart. It's not fun for me!” “But we cannot spin and swing like you can!”

The dots try to convince the line to let them play together. It could be all three or just the two. It can leave and be alone, or play together with the dots. What will you choose?

#blog #blogging #shortstory

 
Read more...

from Programming

Edit: The below didn't work. Jump to the edit to see the current attempt.

I'm experimenting with where to put these types of blog posts. I have been putting them on my home server, at gotosocial.michaeldileo.org, but I'm thinking of moving them over here instead of a micro-blogging platform.

Longhorn, the system that is used to manage storage for Keyboard Vagabond, performs regular backups and disaster recovery management. I noticed that over the last few billing cycles, the costs for S3 cloud storage with Backblaze were about $25 higher than expected, and given that the last two bills were like this, it's not a fluke.

The costs come from s3_list_objects – over 5M calls last month. It turns out this is a common issue that has been discussed on GitHub, Reddit, Stack Overflow, etc. The fix seems to be to simply turn the backup-store polling off: it doesn't appear to be required for backups or disaster recovery to work, and Longhorn seems to be doing something very wrong to generate that many calls. The setting goes in Longhorn's default settings:

...
data:
    default-resource.yaml: |-
        ...
        "backupstore-poll-interval": "0"

My expectation is that future billing cycles should come in well under $10/month for storage. The current daily average storage size is 563GB; at Backblaze B2's $6/TB/month, that's about $3.38 per month.

#kubernetes #longhorn #s3 #programming #selfhosting #cloudnative #keyboardvagabond

Edit – the above didn't work (new solution below)

Ok, so the network policy did block the external traffic, but it also blocked some internal traffic, which caused the pods to never reach a ready state. I've been playing around with variations of different ports, but I haven't found a full solution yet. I'll update if I get it resolved. Update: I got it – I had to switch to a CiliumNetworkPolicy.

I also tried changing the polling interval from 0 to 86400, though I think the issue is ultimately how Longhorn makes the calls, so bear this in mind if you use it. Right now I'm toying with the idea of setting a billing cap: my backups happen after midnight, so maybe gamble on the cap resetting, a backup happening, and then at some point the cap being hit so further calls fail until morning? This might be a bad idea, but I think it would at least limit my daily expenditure.

One thing to note from the various docs I read: in Longhorn v1.10.0, they removed the polling configuration variable, since you can set it in the UI. At this point I still hadn't solved the issue.

I saw that yesterday Longhorn made 145,000 Class C requests (s3_list_objects). I found a GitHub issue where someone solved this by setting a network policy to block egress outside of the backup hours. I had Claude draw up some policies, configurations, and test scripts to monitor/observe the different system states. The catch, though, is that I use FluxCD to maintain state and configuration, so this policy cannot be managed by Flux – it would just re-apply the policy the moment a cron job deleted it.

The gist is that a blocking network policy is created manually, then two CronJobs manage it: one deletes the policy 5 minutes before the backup window, and another recreates it 3 hours later. I'm hoping this will be the solution.

Edit: I think I finally got it. I had to switch from a NetworkPolicy to a CiliumNetworkPolicy, since Cilium is what I'm using (duh?). Using toEntities: kube-apiserver fixed a lot of issues. Here's what I have below: the blocking network configuration and the CronJobs that remove and re-create it. I still have a billing cap in place for now. I found that all volumes backed up after the daily reset. I'm going to keep the cap for a few days and then consider whether to remove it. I at least feel better now about being a good citizen and not hammering APIs unnecessarily.

---
# NetworkPolicy: Blocks S3 access by default
# This is applied initially, then managed by CronJobs below
# Using CiliumNetworkPolicy for better API server support via toEntities
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: longhorn-block-s3-access
  namespace: longhorn-system
  labels:
    app: longhorn
    purpose: s3-access-control
spec:
  description: "Block external S3 access while allowing internal cluster communication"
  endpointSelector:
    matchLabels:
      app: longhorn-manager
  egress:
    # Allow DNS to kube-system namespace
    - toEndpoints:
      - matchLabels:
          k8s-app: kube-dns
      toPorts:
      - ports:
        - port: "53"
          protocol: UDP
        - port: "53"
          protocol: TCP
    # Explicitly allow Kubernetes API server (critical for Longhorn)
    # Cilium handles this specially - kube-apiserver entity is required
    - toEntities:
      - kube-apiserver
    # Allow all internal cluster traffic (10.0.0.0/8)
    # This includes:
    # - Pod CIDR: 10.244.0.0/16
    # - Service CIDR: 10.96.0.0/12 (API server already covered above)
    # - VLAN Network: 10.132.0.0/24
    # - All other internal 10.x.x.x addresses
    - toCIDR:
      - 10.0.0.0/8
    # Allow pod-to-pod communication within cluster
    # The 10.0.0.0/8 CIDR block above covers all pod-to-pod communication
    # This explicit rule ensures instance-manager pods are reachable
    - toEntities:
      - cluster
    # Block all other egress (including external S3 like Backblaze B2)
---
# RBAC for CronJobs that manage the NetworkPolicy
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-netpol-manager
  namespace: longhorn-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: longhorn-netpol-manager
  namespace: longhorn-system
rules:
- apiGroups: ["cilium.io"]
  resources: ["ciliumnetworkpolicies"]
  verbs: ["get", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: longhorn-netpol-manager
  namespace: longhorn-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: longhorn-netpol-manager
subjects:
- kind: ServiceAccount
  name: longhorn-netpol-manager
  namespace: longhorn-system
---
# CronJob: Remove NetworkPolicy before backups (12:55 AM daily)
# This allows S3 access during the backup window
apiVersion: batch/v1
kind: CronJob
metadata:
  name: longhorn-enable-s3-access
  namespace: longhorn-system
  labels:
    app: longhorn
    purpose: s3-access-control
spec:
  # Run at 12:55 AM daily (5 minutes before earliest backup at 1:00 AM Sunday weekly)
  schedule: "55 0 * * *"
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 2
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: longhorn-netpol-manager
        spec:
          serviceAccountName: longhorn-netpol-manager
          restartPolicy: OnFailure
          containers:
          - name: delete-netpol
            image: bitnami/kubectl:latest
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - |
              echo "Removing CiliumNetworkPolicy to allow S3 access for backups..."
              kubectl delete ciliumnetworkpolicy longhorn-block-s3-access -n longhorn-system --ignore-not-found=true
              echo "S3 access enabled. Backups can proceed."
---
# CronJob: Re-apply NetworkPolicy after backups (4:00 AM daily)
# This blocks S3 access after the backup window closes
apiVersion: batch/v1
kind: CronJob
metadata:
  name: longhorn-disable-s3-access
  namespace: longhorn-system
  labels:
    app: longhorn
    purpose: s3-access-control
spec:
  # Run at 4:00 AM daily (gives 3 hours 5 minutes for backups to complete)
  schedule: "0 4 * * *"
  successfulJobsHistoryLimit: 2
  failedJobsHistoryLimit: 2
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: longhorn-netpol-manager
        spec:
          serviceAccountName: longhorn-netpol-manager
          restartPolicy: OnFailure
          containers:
          - name: create-netpol
            image: bitnami/kubectl:latest
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - |
              echo "Re-applying CiliumNetworkPolicy to block S3 access..."
              kubectl apply -f - <<EOF
              apiVersion: cilium.io/v2
              kind: CiliumNetworkPolicy
              metadata:
                name: longhorn-block-s3-access
                namespace: longhorn-system
                labels:
                  app: longhorn
                  purpose: s3-access-control
              spec:
                description: "Block external S3 access while allowing internal cluster communication"
                endpointSelector:
                  matchLabels:
                    app: longhorn-manager
                egress:
                # Allow DNS to kube-system namespace
                - toEndpoints:
                  - matchLabels:
                      k8s-app: kube-dns
                  toPorts:
                  - ports:
                    - port: "53"
                      protocol: UDP
                    - port: "53"
                      protocol: TCP
                # Explicitly allow Kubernetes API server (critical for Longhorn)
                - toEntities:
                  - kube-apiserver
                # Allow all internal cluster traffic (10.0.0.0/8)
                - toCIDR:
                  - 10.0.0.0/8
                # Allow pod-to-pod communication within cluster
                # The 10.0.0.0/8 CIDR block above covers all pod-to-pod communication
                - toEntities:
                  - cluster
                # Block all other egress (including external S3)
              EOF
              echo "S3 access blocked. Polling stopped until next backup window."
 
Read more...