Tuesday, 16 July 2019

Running .NET Core apps on Raspberry Pi Zero

Recently I wrote a small app that I was planning to run on my Raspberry Pi Zero, since it's incredibly compact! With .NET Core now being cross platform, I thought it would be a simple affair, so I compiled and deployed the app targeting the linux-arm runtime, and tried to run it on the device, only to be met with this:


./ConsoleApp
Segmentation fault

I thought something was wrong, so I tried it on a newer Raspberry Pi, and everything worked great.

A quick search later, I learned that the Raspberry Pi Zero uses an ARMv6 processor, while the .NET Core JIT depends on ARMv7 instructions.
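You can confirm this on the device itself:

uname -m

On the Pi Zero this prints armv6l, while a Raspberry Pi 2 or 3 reports armv7l.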

I thought that this was the end of it, but upon doing a little more research, I found that the Mono framework can actually run on the Raspberry Pi Zero!

The solution was simple: re-compile the app in framework-dependent, portable mode, and copy it onto the device. Instead of a native executable, you end up with your app's .dll file.
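For reference, a framework-dependent, portable build is what dotnet publish produces by default when no runtime identifier is specified:

dotnet publish -c Release

The output, including your app's .dll, ends up under bin/Release/<framework>/publish.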

Then, on the Raspberry Pi Zero, install Mono and run your app:


sudo apt update
sudo apt upgrade
sudo apt install mono-complete
mono ./ConsoleApp.dll
Hello World!

Success! I hope this helps someone else who ran into the same issue.

Setting up MAAS and Kubernetes in a virtualised environment

Setting up MAAS with Kubernetes and persistent storage in a Hyper-V environment, with an existing network and DHCP server. Unfortunately, there is limited documentation on running MAAS in an existing network with a DHCP server, and there's little to no mention of Hyper-V support.

While it's not the recommended environment, I recently decided to try spinning up a Kubernetes cluster in an existing network, and MAAS seemed to be the recommended way to deploy it. Since that network was running a Hyper-V cluster, I decided to see how hard it would be to spin up MAAS on top of Hyper-V machines. After experimenting, and several full wipes and clean starts, I ended up with a redundant Kubernetes cluster and distributed storage nodes using Ceph. I decided to outline the process of installing and configuring it, as well as some things I learned along the way.

(Read More)

Sunday, 14 July 2019

Private CA Part 2: Issuing certificates

In the first part, I outlined how to create a new root and an intermediate Certificate Authority using OpenSSL. Once these are created, we can get to the fun part: creating the certificates we'll use for securing web servers, signing documents, assemblies, etc.

Each certificate has a number of fields that describe it. There are some core fields, like the serial number, validity period, Subject, Issuer, thumbprint, etc. There are also extension fields that describe the usage constraints for the certificate - for example, you could create a certificate that can only be used by a web server for a specific domain, or one that can only be used to sign e-mails and documents.
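You can inspect all of these fields on an existing certificate with OpenSSL (assuming a PEM-encoded certificate named cert.pem):

openssl x509 -in cert.pem -noout -text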

You could just create a certificate that doesn't have any restrictions and could be used for just about anything. That would be OK for testing purposes, but it's generally recommended to create certificates for individual purposes. This way, if a certificate's private key is compromised, you'll only have to reconfigure a single application with a new certificate.

(Read More)

Private CA Part 1: Building your own root and intermediate certificate authority

Getting an SSL certificate these days has become much easier than it was in the past, with the availability of free Certificate Authorities (CAs) like Let's Encrypt. But even so, there are scenarios where you need a certificate that they can't issue: longer-term certificates, complex wildcards, local addresses within your environment, and even routers that are accessed by IP instead of a DNS name. Some of these could be issued by a paid CA; others aren't an option at all. Code signing certificates are also great, but not cheap, while encryption and authentication certs are generally only issued in enterprise environments.

Getting a self-signed certificate is pretty easy - most routers will generate their own certificates, and it's pretty straightforward to create your own using openssl or similar tools. The problem with self-signed certificates is that they won't be trusted by default. You still get the benefit of your connection being encrypted, but there's no guarantee that nobody intercepted your data, altered it and signed it with their own untrusted cert, unless you check the certificate every time. You could always add your certificate to your local trust store, but you'd have to do that for every single certificate you create, on every device you access them from, which quickly becomes cumbersome.
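For example, a throwaway self-signed certificate takes a single openssl command - the file names and common name below are just placeholders:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout selfsigned.key -out selfsigned.crt \
    -subj "/CN=myrouter.local"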

The solution is simple - you can create your own private CA and add it to your trust store. Any certificates created by that CA would be trusted as well, which makes managing this considerably easier! You wouldn't use these certs on your public website, but they'd be perfect for internal services or your home lab.
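On a Debian-based system, trusting your new CA looks something like this (the file name is an example; update-ca-certificates only picks up files with a .crt extension):

sudo cp myca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates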

Taking it one step further, you could also create intermediate CAs, forming a trust chain - the end device certificates would be issued by your intermediate CA. If your intermediate CA keys get compromised, you can just revoke them and create a new intermediate, without needing to update the trust store on your machines.
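You can verify that a certificate chains up correctly through the intermediate to the root with openssl (again, the file names are placeholders):

openssl verify -CAfile root.crt -untrusted intermediate.crt server.crt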

In these articles I'll put down what I learned while creating my own CA. I've decided to break this down into several parts, to make it easier to digest and manage:

(Read More)

Sunday, 19 May 2019

Adding license details to NuGet packages

I've always tried to make sure I add license details to my open source projects, especially when publishing them to NuGet. Previously, this was done by adding a <licenseUrl> element to your .nuspec file, which would allow users to see license details when downloading packages.

When using the 2017 csproj format (so all .NET Core projects), you can have the dotnet pack command build your NuGet package automatically. To populate the license URL, you just had to add the following to your project file:

<PropertyGroup>
  <PackageLicenseUrl>https://opensource.org/licenses/MIT</PackageLicenseUrl>
</PropertyGroup>

Sometime in 2018, I noticed that my builds started throwing a new warning:

warning NU5125: The 'licenseUrl' element will be deprecated. Consider using the 'license' element instead.

I just assumed that the nuspec format had been altered, and that the dotnet tool would eventually catch up. But I should have known better - of course the change was deeper than that. Half a year later, the warning was still there, so I decided to check it out, and found this issue on GitHub discussing it: https://github.com/NuGet/Home/issues/7509

Which led me to their wiki, describing the changes to the nuspec/nupkg files regarding adding license details: https://github.com/NuGet/Home/wiki/Packaging-License-within-the-nupkg

Instead of just a single URL, you can now either add a license file to the package and point to that, or add a license expression, which can describe a combination of several well-known licenses. The same article also describes how to update your csproj file to use the new license field.

In my case, to link to the MIT license, I had to replace the PackageLicenseUrl field with the following:

<PropertyGroup>
  <PackageLicenseExpression>MIT</PackageLicenseExpression>
</PropertyGroup>

Moral of the story? Never assume, and check the docs more thoroughly.

Thursday, 01 March 2018

Deploying cross platform images to Docker registries

One thing I noticed when working with Docker and cross-platform registries was that you can pull the same image tag from a remote registry and get different images depending on which platform you requested - certainly different from how the local list of images works! Digging deeper, I learned that this wasn't something new; it had been available for close to half a year! You can read the official announcement here: https://blog.docker.com/2017/09/docker-official-images-now-multi-platform/

Basically, when you try to pull an image from a repository, your client actually pulls a manifest file listing either the details of a single image, or a list of images to choose from based on the local machine's CPU architecture, OS platform and version! This way you can pull the same image tag on various machines, and have it running on just about any platform you want. This is especially powerful with the release of Docker for Windows 18.03, where you can run both Windows and Linux images side by side on the same machine!
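You can inspect these manifests yourself. At the time of writing the manifest command is still experimental, so it may need to be enabled in your Docker client configuration first:

docker manifest inspect hello-world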

The manifest format is quite simple and easy to digest! For example, this is the manifest for my Hello World app that I created to test multi-platform Docker deployments:

(Read More)

Running cross platform containers on Windows in Docker 18.03

With the recent release of Docker for Windows 18.03, I decided to finally start experimenting with it. One of the main features this release brings is the ability to run both Windows and Linux images side by side, instead of having to switch Docker between Linux and Windows modes. The other benefit is that it runs all the images using the Windows container infrastructure, instead of running the Linux images in a Linux VM on your machine (which had to have some of your CPU and memory permanently allocated to it). This means that unless you're running a container, Docker will hardly use any resources at all! All this combined makes Docker for Windows considerably more attractive!

There are some issues in this release (at the moment it's part of the Edge channel, so that's to be expected), but it's a great step forward.

To start things off, at the moment Docker won't try to auto-detect which platform to use when running an image. Instead, it will always assume you want the default system platform, unless specified otherwise. For example, on my Windows 10 machine the following command will pull a Windows image:

docker pull hello-world

To get a Linux image, you'd have to add the --platform=linux argument:

docker pull --platform=linux hello-world

Now this might not be what you want, and I was wondering if there's a way to change it. Unfortunately I haven't found it documented anywhere obvious, but I eventually stumbled on a GitHub discussion, which pointed me in the right direction. All you need to do is set the DOCKER_DEFAULT_PLATFORM environment variable to linux or windows to change the default platform your CLI will use! In my case I switched the default to linux straight away.
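To persist the setting on Windows you can use setx - note that it only affects shells opened afterwards, so you'll also want to set the variable directly in your current session:

setx DOCKER_DEFAULT_PLATFORM linux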

(Read More)

Wednesday, 21 February 2018

The woes of setting up and deploying TX Text Control

Recently I've had to set up and configure TX Text Control, and I found it to be quite a bit more challenging than I expected. This was mainly because of the serious lack of quality documentation. That's not to say that they don't have docs, it's just that they're... confusing and not very well structured. It took me some time and a fair amount of trial and error to finally get it set up and running on the server. I thought it might be useful to share my experience, and provide a short and simple summary of the configuration I ended up with.

(Read More)

Monday, 19 February 2018

Adding Upsert support for Entity Framework Core

Like many others, I have used Entity Framework and have been a fan of the simplicity it brings to database access. It's quite powerful and can be used to execute a large variety of queries. Any query that can't be expressed using LINQ syntax I usually move to a stored procedure or a function and call that from EF. One thing that I've always moved to a stored procedure is the upsert command.

Actually, I'd never called it upsert until recently, when I stumbled upon a reference to it. Since the database engine I've worked with most is SQL Server, I've used the MERGE statement to execute an atomic (not really) UPDATE/INSERT, and it looks something like this:

MERGE dbo.[Countries] AS [T]
USING ( VALUES ( 'Australia', 'AU' ) ) AS [S] ( [Name], [ISO] )
    ON [T].[ISO] = [S].[ISO]
WHEN MATCHED THEN
    UPDATE SET
        [Name] = [S].[Name]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ( [Name], [ISO] )
    VALUES ( [S].[Name], [S].[ISO] );

Other databases that I started working with recently have similar syntax available. For example, in PostgreSQL, one could use the INSERT … ON CONFLICT DO UPDATE syntax:

INSERT INTO public."Countries" ( "Name", "ISO" )
VALUES ( 'Australia', 'AU' )
ON CONFLICT ( "ISO" )
DO UPDATE SET "Name" = EXCLUDED."Name";

I thought it would be interesting to see whether this could be done in Entity Framework directly, rather than having to write it in SQL. Out of the box EF doesn't support it, even though there is interest in adding it - there's even an issue on EF Core's GitHub project discussing this. But the concept itself is simple, so I thought it would be an interesting project to play around with.

(Read More)

Saturday, 01 June 2013

Using local .aar Android library packages in gradle builds

Since Gradle became the new build system for Android, there have been a lot of questions popping up all over the net about how to use it. The new build system comes with a number of great features, like multi-project builds, Android archive packages (.aar) for Android libraries, and so on. Unfortunately, since the new build system is quite fresh (version 0.4.2 at the moment), the documentation is rather limited, so not everything is clear and simple.

For example, if you have a solution with an Android library and an Android app (that depends on the library), your build will work just fine. But what if you want to decouple the library and keep it separate, so that you can use it in other projects or share it with the community? The Gradle build system will package it as an Android archive package (.aar), and you can add that as a dependency to your projects. The only problem is that referencing .aar packages locally doesn't work very well, and it seems like that's by design. As explained by +Xavier Ducrichet in this comment:

using aar files locally can be dangerous. I want to look at either detecting issues or putting huge warnings.

This means that to add a reference to an .aar package, it would ideally have to be stored in the central Maven repository (now that Maven Central finally supports Android archive packages!). But what if that's not an option, for example if the library you're referencing is in development?
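One workaround is to install the .aar into your local Maven repository and reference it from there using mavenLocal(); a rough sketch, with placeholder names, assuming you have Maven installed:

mvn install:install-file -Dfile=androidextensions.aar \
    -DgroupId=com.example -DartifactId=androidextensions \
    -Dversion=1.0 -Dpackaging=aar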

(Read More)

Saturday, 25 May 2013

Referencing android library packages in Gradle

UPD: It seems that referencing local .aar packages is not recommended. But you can just as easily set them up in a local Maven repo, which works even better!

Using local .aar Android library packages in gradle builds


Playing with the new Gradle Android build system, I created some multi-project setups, and it seems to work great! I had a project with a main Android app, an Android library and a Java library all wired up and working well.

But once I tried to decouple the Android library to a separate location and just inject the .aar package into the project dependency list, I ran into a problem. The project completely refused to build, stating:

:mainapp:packageDebug
Error: duplicate files during packaging of APK D:\Development\MyProject\mainapp\build\apk\mainapp-debug-unaligned.apk
        Path in archive: AndroidManifest.xml
        Origin 1: D:\Development\MyProject\mainapp\build\libs\mainapp-debug.ap_
        Origin 2: D:\Development\MyProject\mainapp\libs\AndroidExtensions.aar

Everything seemed to be configured correctly - the Android library was producing a proper .aar library package file, and I was sure that it should work out of the box, but it was just refusing to work...

The solution was actually much simpler than I expected:

(Read More)

Adding support for AndroidAnnotations in Gradle projects

Recently I've started migrating my Android projects to the new Gradle build system, and I've been quite impressed. The new build system is quite powerful, but at the same time it is rather fresh, which means there are some things that aren't supported or documented yet.

One of the things that took me quite a bit of time to research and figure out was adding support for the AndroidAnnotations project. While annotation processing seems to be enabled out of the box, this specific library has some characteristics that make it work only after some custom configuration. One of the main issues is that the library looks for the Android manifest file by going up from the generated files - and since the Gradle project structure is different from the old one, this causes issues.

(Read More)

Sunday, 26 June 2011

Dual Battery Widget for Android

This is one of my first projects for Android, and the first one that was released in the wild.

A battery widget that displays the status of the internal battery and the secondary battery in your docking station. It is mostly intended for the Asus Eee Pad Transformer tablet (a device that actually has a second battery).

The widget is resizable, and can be as small as 1 cell on your screen, or fill your whole home screen! Advanced options allow you to change the size and position of the status text, and to hide the second battery gauge when it is disconnected. On devices without a dock, the widget will only show the main battery icon.

(Read More)

Say Hello to Android

For the last month or so, I've been slowly entering a new territory for myself, a new development platform that I've hardly touched before. My new obsession is developing for Android!

Even though I've used smartphones for years and years, I never really got into mobile development. I had been using Symbian and Windows Mobile based phones since 2005, I believe, and I only ever got a little past downloading the SDK. Once I got more serious about Windows development (the C# era for me), I tried making some apps for Windows Mobile, but it never really stuck with me, never felt natural or interesting.

I've been using Android phones for almost a year now, and the platform has proved to be really exciting. I love the platform, I love the community, and I love the way it allows apps to integrate with each other - it's something I hadn't seen before. So I decided to give it a try, and I couldn't stop since!

At the moment, I'm dedicating most of my free time to the couple of projects I've started. One of them is already out in the wild, and within a week it grew its own fan base. The others are still in the incubation period, but I believe they're growing nicely. I should have some more updates about them soon, as well as my thoughts (and hopefully tips and tricks) about this big new world that has opened up for me.

Thursday, 24 February 2011

Server Manager 2008 failing to detect system status

While setting up a new server recently, I ran into an unexpected problem. It was quite a straightforward setup, nothing unusual: a Windows Server 2008 R2 OS with SQL Server, IIS, a couple of basic services and the latest updates.

A couple of restarts later, I opened Server Manager and was greeted by a white page and a request to check some error logs - specifically, the following branch: Event Viewer -> Applications and Services Logs -> Microsoft -> Windows -> ServerManager -> Operational. Inside, I found a number of error entries, all of them complaining about the same thing:

Could not discover the state of the system. An unexpected exception was found:
System.Runtime.InteropServices.COMException (0x800F0818): Exception from HRESULT: 0x800F0818

Since the system had been installed only the previous day, I could have dropped it and started from scratch, but I didn't want to give up that easily. Plus, I was curious about what exactly was causing the problem. So I decided to dig deeper and find out what exactly was going on.

(Read More)
