<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Aaron Lenoir - At least it works.]]></title><description><![CDATA[I wonder how ...]]></description><link>https://blog.aaronlenoir.com/</link><image><url>https://blog.aaronlenoir.com/favicon.png</url><title>Aaron Lenoir - At least it works.</title><link>https://blog.aaronlenoir.com/</link></image><generator>Ghost 2.38</generator><lastBuildDate>Thu, 12 Feb 2026 15:11:21 GMT</lastBuildDate><atom:link href="https://blog.aaronlenoir.com/author/aaron-2/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Building Electron apps for Windows on Debian]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I'm playing around with <a href="https://electron.atom.io/">Electron</a> lately.</p>
<p>One of the things I wanted to do was build the Electron Application for Windows on my Linux development box, which runs Debian.</p>
<p>I used <a href="https://github.com/electron-userland/electron-builder">electron-builder</a> for building the app.</p>
<p>In this post I'll focus on making electron-builder build the app for Windows and</p>]]></description><link>https://blog.aaronlenoir.com/2017/03/03/building-electron-apps-for-windows-on-debian/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa9</guid><category><![CDATA[english]]></category><category><![CDATA[electron]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Fri, 03 Mar 2017 23:34:35 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I'm playing around with <a href="https://electron.atom.io/">Electron</a> lately.</p>
<p>One of the things I wanted to do was build the Electron Application for Windows on my Linux development box, which runs Debian.</p>
<p>I used <a href="https://github.com/electron-userland/electron-builder">electron-builder</a> for building the app.</p>
<p>In this post I'll focus on making electron-builder build the app for Windows; I won't cover setting up Electron, setting up electron-builder, or creating the Electron app itself.</p>
<h2 id="tldr">TL;DR</h2>
<p>You need Wine 1.8+, but Debian's default repositories install Wine 1.6.2.</p>
<p>To install Wine 1.8+ on Debian:</p>
<ul>
<li>Add Jessie Backports to /etc/apt/sources.list</li>
<li>See: <a href="https://wiki.debian.org/Wine#Installation_from_Jessie_backports">Installation from Jessie Backports</a></li>
</ul>
<h2 id="configuration">Configuration</h2>
<p>To make electron-builder build for Windows, the following is needed in package.json:</p>
<pre><code>  &quot;build&quot;: {
    &quot;appId&quot;: &quot;com.aaronlenoir.gnucash-reporter&quot;,
    &quot;win&quot;: {
      &quot;target&quot;: [
        &quot;zip&quot;
      ]
    }
  },
</code></pre>
<p>For convenience, I have the following scripts too:</p>
<pre><code>  &quot;scripts&quot;: {
    ...
    &quot;pack&quot;: &quot;build --dir&quot;,
    &quot;dist&quot;: &quot;build&quot;,
    &quot;pack-win&quot;: &quot;build --dir --win&quot;,
    &quot;dist-win&quot;: &quot;build --win&quot;
  },
</code></pre>
<p>That way, I can run <code>npm run pack-win</code> to build for Windows.</p>
<h2 id="building">Building</h2>
<p>Electron-builder needs Wine in order to build for Windows, so I installed it using apt:</p>
<pre><code>apt-get install wine
</code></pre>
<h3 id="problem1wineversion">Problem 1: Wine version</h3>
<p>The first run, I got the following error:</p>
<pre><code>$ npm run pack-win

&gt; gnucash-reporter@0.0.1-alpha pack-win /home/user/electron/gnucash-reporter
&gt; build --dir --win

Skip app dependencies rebuild because platform is different
TypeError: Invalid Version: it
    at new SemVer (/home/user/electron/gnucash-reporter/node_modules/semver/semver.js:293:11)
    at compare (/home/user/electron/gnucash-reporter/node_modules/semver/semver.js:566:10)
    at lt (/home/user/electron/gnucash-reporter/node_modules/semver/semver.js:600:10)
    at /home/user/electron/gnucash-reporter/node_modules/electron-builder/src/packager.ts:473:7
...
</code></pre>
<p>Note the strange message: <code>TypeError: Invalid Version: it</code></p>
<p>The stack trace suggested it was checking the version of Wine, but apparently that version was not a valid <a href="http://semver.org/">Semver</a> version number: &quot;it&quot;.</p>
<p>The mystery was solved, however, by asking Wine for its version:</p>
<pre><code>$ wine --version
it looks like multiarch needs to be enabled.  as root, please
execute &quot;dpkg --add-architecture i386 &amp;&amp; apt-get update &amp;&amp;
apt-get install wine32&quot;
wine-1.6.2
</code></pre>
<p>Now it was clear where &quot;it&quot; came from. So I followed the instructions and ran:</p>
<pre><code>dpkg --add-architecture i386 &amp;&amp; apt-get update &amp;&amp; apt-get install wine32
</code></pre>
<p>After that, wine reported its version without the multiarch warning (still 1.6.2):</p>
<pre><code>$ wine --version
wine-1.6.2
</code></pre>
<p>So I ran <code>npm run pack-win</code> again and received the error:</p>
<pre><code>Error: wine 1.8+ is required, but your version is 1.6.2, please see https://github.com/electron-userland/electron-builder/wiki/Multi-Platform-Build#linux
</code></pre>
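<p>Before kicking off a build, it can help to run the same version check yourself. A minimal sketch, assuming a POSIX shell with GNU coreutils; the <code>wine_ok</code> helper name is my own, not something electron-builder provides:</p>

```shell
# wine_ok VERSION: succeed if VERSION is at least 1.8, the minimum that
# electron-builder requires. `sort -V` compares version strings numerically,
# so 1.8 sorting first means VERSION is >= 1.8.
wine_ok() {
  [ "$(printf '%s\n' 1.8 "$1" | sort -V | head -n 1)" = "1.8" ]
}

wine_ok "1.6.2" || echo "too old"   # Debian's default Wine fails the check
wine_ok "1.8.6" && echo "ok"        # the backports version passes
```

<p>On a real system you would feed it the version parsed out of <code>wine --version</code>.</p>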
<h3 id="problem2gettingwine18">Problem 2: Getting Wine 1.8</h3>
<p>I'm not very familiar with Debian, but I had always assumed apt would give me the latest stable version of applications.</p>
<p>I followed the instructions on: <a href="https://github.com/electron-userland/electron-builder/wiki/Multi-Platform-Build#linux">https://github.com/electron-userland/electron-builder/wiki/Multi-Platform-Build#linux</a></p>
<p>But they don't work well on Debian, because apt can't find the appropriate packages.</p>
<p>I found these instructions on the <a href="https://wiki.debian.org/Wine#Step_2:_Installation">Debian wiki</a>, which worked:</p>
<ul>
<li>Add this line to /etc/apt/sources.list</li>
</ul>
<pre><code>deb http://httpredir.debian.org/debian jessie-backports main
</code></pre>
<ul>
<li>Update the package lists</li>
</ul>
<pre><code>sudo apt-get update
</code></pre>
<ul>
<li>Then, for 64-bit systems:</li>
</ul>
<pre><code>sudo apt install \
      wine/jessie-backports \
      wine32/jessie-backports \
      wine64/jessie-backports \
      libwine/jessie-backports \
      libwine:i386/jessie-backports \
      fonts-wine/jessie-backports
</code></pre>
<p>After this:</p>
<pre><code>$ wine --version
wine-1.8.6 (Debian 1.8.6-5~bpo8+1)
</code></pre>
<h3 id="results">Results</h3>
<p>After this the build succeeded:</p>
<pre><code>user@debian:~/electron/gnucash-reporter$ npm run pack-win

&gt; gnucash-reporter@0.0.1-alpha pack-win /home/user/electron/gnucash-reporter
&gt; build --dir --win

Skip app dependencies rebuild because platform is different
Packaging for win32 x64 using electron 1.4.15 to dist/win-unpacked
user@debian:~/electron/gnucash-reporter$ ls
build  dist  install-builder-deps.sh  LICENSE  node_modules  package.json  README.md  src  test
user@debian:~/electron/gnucash-reporter$ cd dist/
user@debian:~/electron/gnucash-reporter/dist$ ls
linux-unpacked  win-unpacked
</code></pre>
<p>win-unpacked, viewed on Windows, looks like this:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2017/03/2017-03-04-00_18_48-win-unpacked.png" alt></p>
<p>And running the executable ran the application without any issues.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2017/03/2017-03-04-00_20_31-Editor---Aaron-and-the-Buzzwords.png" alt></p>
<h1 id="conclusion">Conclusion</h1>
<p>You need:</p>
<ul>
<li>Electron</li>
<li>Electron-Builder</li>
<li>Wine 1.8+</li>
</ul>
<p>Debian ships Wine 1.6.2 by default, and you need to add the &quot;jessie-backports&quot; package source to your system in order to install Wine 1.8+.</p>
<p>For reference, the practice application I'm building in Electron is on GitHub: <a href="https://github.com/AaronLenoir/gnucash-reporter">https://github.com/AaronLenoir/gnucash-reporter</a>.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Introducing blog.aaronlenoir.com]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>This month I was able to get <a href="https://blog.aaronlenoir.com">https://blog.aaronlenoir.com</a> up and running.</p>
<p>This means I got:</p>
<ul>
<li>a domain name (aaronlenoir.com);</li>
<li>an easier URL for my blog (it used to be <a href="https://blog-aaronlenoir.rhcloud.com">https://blog-aaronlenoir.rhcloud.com</a>);</li>
<li>an SSL Certificate</li>
</ul>
<p>My main motivation was that if I need to move</p>]]></description><link>https://blog.aaronlenoir.com/2017/01/15/introducing-blog-aaronlenoir-com/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa2</guid><category><![CDATA[english]]></category><category><![CDATA[blog]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Sun, 15 Jan 2017 21:37:45 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>This month I was able to get <a href="https://blog.aaronlenoir.com">https://blog.aaronlenoir.com</a> up and running.</p>
<p>This means I got:</p>
<ul>
<li>a domain name (aaronlenoir.com);</li>
<li>an easier URL for my blog (it used to be <a href="https://blog-aaronlenoir.rhcloud.com">https://blog-aaronlenoir.rhcloud.com</a>);</li>
<li>an SSL Certificate</li>
</ul>
<p>My main motivation was that if I ever need to move my blog to some other hosting, the old URLs will still work. I'd just point the domain to another host.</p>
<p>With the rhcloud.com URL, this would not be possible.</p>
<p>I'm still hosting this blog on Red Hat's OpenShift, using Ghost. Setting it all up was interesting, so here's what needed to be done.</p>
<h1 id="gettingadomainname">Getting a domain name</h1>
<p>I never bought a domain name, so I was unsure what would be involved.</p>
<p>I really wanted aaronlenoir.be but that was already taken.</p>
<p>At first I looked at <a href="https://www.dnsimple.com">dnsimple</a>, because I had heard a lot of good things about them. But I'd probably not need most of their services; they are a better fit if you need to manage multiple domains, and I just needed one domain name.</p>
<p>Then I looked at both <a href="https://www.godaddy.com/">GoDaddy</a> and <a href="https://www.namecheap.com/">Namecheap</a>. In general, I read more positive things about Namecheap than about GoDaddy. From what I understand, GoDaddy is really cheap but charges extra for almost any additional feature.</p>
<p>I went with Namecheap and bought aaronlenoir.com for $20 for two years. This includes a one-year WhoisGuard subscription. WhoisGuard ensures that when someone looks up my domain information, they don't see my personal details.</p>
<p>That was really all there was to it.</p>
<h1 id="pointthedomaintotheblog">Point the Domain to the blog</h1>
<p>When you get a domain, you can create a number of sub-domains (for example blog.aaronlenoir.com).</p>
<p>Since I already had hosting, it was a matter of pointing blog.aaronlenoir.com to blog-aaronlenoir.rhcloud.com.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-21-23_38_23-Advanced-DNS.png" alt></p>
<p>Then in the OpenShift Web Console, open the app, click on &quot;change alias&quot; and add the domain.</p>
<p>It takes a while for this change to have effect. After about half an hour I was able to visit my blog from the domain.</p>
<h1 id="fixingghostredirect">Fixing Ghost redirect</h1>
<p>When I visited: <a href="http://blog.aaronlenoir.com">http://blog.aaronlenoir.com</a> I was always redirected to <a href="https://blog-aaronlenoir.rhcloud.com/">https://blog-aaronlenoir.rhcloud.com/</a></p>
<p>The reason for this was the configuration of the blog (Ghost). In the config file, I had to hard-code the domain name instead of using the OpenShift environment variable. In config.js, change</p>
<pre><code>url: 'https://'+process.env.OPENSHIFT_APP_DNS,
</code></pre>
<p>to</p>
<pre><code>url: 'https://blog.aaronlenoir.com',
</code></pre>
<p>I could have used extra environment variables instead, but this works fine.</p>
<h1 id="applyinghttps">Applying Https</h1>
<p>So that is step 1, and it works when I use &quot;http&quot;. But I want to go for HTTPS.</p>
<p>This used to be a pain, but nowadays it is quite easy to do, at no cost.</p>
<h2 id="gettingacertificate">Getting a Certificate</h2>
<h3 id="letsencrypt">Let's Encrypt!</h3>
<p>The first thing to do is get a certificate. Traditionally, you had to purchase one from a Certificate Authority.</p>
<p>Luckily, now there is &quot;Let's Encrypt&quot; (<a href="https://www.generosity.com/community-fundraising/make-a-more-secure-web-with-let-s-encrypt">support them!</a>). They provide free certificates and can even renew them automatically.</p>
<p>The main concept is that if you can prove you own the domain, you get a certificate. You prove this by uploading a specific file to the domain. Let's Encrypt then checks from their end that the file is available and provides you with a certificate.</p>
<p>If you have root or administrator access to the server, this can be fully automated. In my case, on OpenShift, I do not have that access and cannot run the automatic tools.</p>
<p>So I had to perform this manually.</p>
<h3 id="gettingacertificateonopenshiftmanually">Getting a Certificate on OpenShift Manually</h3>
<p>To use Let's Encrypt I must be able to upload a specific file to the following location:</p>
<ul>
<li><a href="http://blog.aaronlenoir.com/.well-known/acme-challenge">http://blog.aaronlenoir.com/.well-known/acme-challenge</a></li>
</ul>
<p>There are some problems:</p>
<ul>
<li>I need that folder somewhere in my OpenShift Gear</li>
<li>By default, Ghost won't serve files from that directory.</li>
</ul>
<h3 id="gettingtheacmechallengefolderonopenshift">Getting the acme-challenge folder on OpenShift</h3>
<p>After cloning my gear repo from OpenShift, I edited the file .openshift/action_hooks/deploy to ensure the required folders are present. The following needs to be added to the script:</p>
<pre><code>if [ ! -d &quot;$OPENSHIFT_DATA_DIR/.well-known&quot; ]; then
        mkdir -p $OPENSHIFT_DATA_DIR/.well-known
fi
if [ ! -d &quot;$OPENSHIFT_DATA_DIR/.well-known/acme-challenge&quot; ]; then
        mkdir -p &quot;$OPENSHIFT_DATA_DIR/.well-known/acme-challenge&quot;
fi
echo &quot;Symlinking Lets Encrypt dir.&quot;
rm -rf $OPENSHIFT_REPO_DIR/.well-known
ln -sf $OPENSHIFT_DATA_DIR/.well-known $OPENSHIFT_REPO_DIR/.well-known
</code></pre>
<p>Now this script will run every time I deploy an update to the Gear.</p>
<h3 id="uploadingfilestothedatadir">Uploading files to the Data Dir</h3>
<p>This can be done through an scp command. For example:</p>
<pre><code>scp SomeFileName.txt 337a20c29d20405197d0fac456d60dc4@blog-aaronlenoir.rhcloud.com:/var/lib/openshift/337a20c29d20405197d0fac456d60dc4/app-root/data/.well-known/acme-challenge
</code></pre>
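<p>Since that target path recurs at every renewal, it can be handy to derive it from the gear ID. A small sketch; the <code>acme_target</code> helper is my own naming, not part of OpenShift — it just reconstructs the data-dir layout shown above:</p>

```shell
# acme_target GEAR_ID HOST: print the scp destination for the gear's
# acme-challenge directory, following OpenShift's app-root/data layout.
acme_target() {
  printf '%s@%s:/var/lib/openshift/%s/app-root/data/.well-known/acme-challenge\n' "$1" "$2" "$1"
}

# Usage:
#   scp SomeFileName.txt "$(acme_target 337a20c29d20405197d0fac456d60dc4 blog-aaronlenoir.rhcloud.com)"
acme_target 337a20c29d20405197d0fac456d60dc4 blog-aaronlenoir.rhcloud.com
```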
<h3 id="makingghosthosttheacmechallengefolder">Making Ghost host the acme-challenge folder</h3>
<p>For this, I had to do a dirty trick. Hopefully there is a solution in later versions of Ghost.</p>
<p>First I added a setting to the config file, under &quot;paths&quot;:</p>
<pre><code>paths: {
    contentPath: path.join(__dirname, '/content/'),
    acmeChallenge: path.join(__dirname, '/.well-known/acme-challenge')
}
</code></pre>
<p>I added the acmeChallenge setting.</p>
<p>Secondly, I have to tell Ghost it's ok to serve this folder as static files. This is currently done in the Ghost code itself. This is really annoying, because updating Ghost will probably break this.</p>
<p>In <code>node_modules/ghost/core/server/middleware/index.js</code> add a line:</p>
<pre><code>blogApp.use('/.well-known/acme-challenge', express['static'](config.paths.acmeChallenge, {maxAge: utils.ONE_YEAR_MS}));
</code></pre>
<h3 id="runningtheletsencryptclient">Running the Let's Encrypt client</h3>
<p>In my case, I can't run the Let's Encrypt client on my actual web server, so I'll have to run it on a local machine. Apart from that, it's very straightforward:</p>
<ul>
<li><code>git clone https://github.com/letsencrypt/letsencrypt</code></li>
<li><code>cd letsencrypt</code></li>
<li><code>./letsencrypt-auto -a manual -d blog.aaronlenoir.com</code></li>
</ul>
<p>From that point, it's a matter of following the instructions on the screen, until it asks you to upload the file to the server.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-22-23_28_39-2016-11-07-23_44_09-user-debian_--_OpenShift_letsencrypt-png----Photos.png" alt></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-22-23_29_41-2016-11-07-23_46_02-user-debian_--_OpenShift_letsencrypt-png----Photos.png" alt></p>
<p>Then, the requested file can be uploaded using the command shown before (but then with the correct file):</p>
<pre><code>scp SomeFileName.txt 337a20c29d20405197d0fac456d60dc4@blog-aaronlenoir.rhcloud.com:/var/lib/openshift/337a20c29d20405197d0fac456d60dc4/app-root/data/.well-known/acme-challenge
</code></pre>
<p>After ensuring the file was accessible, the following appeared:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-22-23_40_11-2016-11-08-00_00_01-user-debian_--_OpenShift_letsencrypt-png----Photos.png" alt></p>
<p>So the certificates are ready in /etc/letsencrypt/live/aaronlenoir.com</p>
<p>They are valid for three months. So every three months (or a bit earlier), I'll have to redo this process.</p>
<p>Again, this could be fully automated in many cases.</p>
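<p>Since the renewal is manual, it helps to at least know when it's due. A throwaway sketch using GNU <code>date</code>; the 90-day lifetime is Let's Encrypt's, while the two-week safety margin is my own assumption:</p>

```shell
# Given the day a certificate was issued, print its expiry date (90 days
# later) and a renew-by date with a two-week safety margin (76 days).
issued="2016-11-08"
echo "expires:  $(date -d "$issued + 90 days" +%Y-%m-%d)"
echo "renew by: $(date -d "$issued + 76 days" +%Y-%m-%d)"
```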
<h2 id="installingthecertificate">Installing the Certificate</h2>
<p>In OpenShift's web management console, you go to the app and choose &quot;change alias&quot; again and upload a certificate:</p>
<ul>
<li>fullchain.pem for &quot;SSL Certificate&quot;</li>
<li>privkey.pem for &quot;Certificate Private Key&quot;</li>
</ul>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-08-00_01_55-Alias--blog-aaronlenoir-com--_-OpenShift-Online-by-Red-Hat.png" alt></p>
<p>And click Save.</p>
<p>The result is that I was able to visit <a href="https://blog.aaronlenoir.com">https://blog.aaronlenoir.com</a> and not get certificate warnings.</p>
<h2 id="ssllabsresults">SSL Labs Results</h2>
<p>Running this through SSL Labs was encouraging:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-08-00_22_26-SSL-Server-Test_-blog-aaronlenoir-com--Powered-by-Qualys-SSL-Labs-.png" alt></p>
<h1 id="renewingthecertificate">Renewing the Certificate</h1>
<p>So after three months, the certificate expires. Because OpenShift doesn't support renewing the certificates automatically, I have to do this myself every time.</p>
<p>The process is the same:</p>
<ul>
<li>Run Let's Encrypt: <code>./letsencrypt-auto -a manual certonly -d blog.aaronlenoir.com</code></li>
<li>Upload the requested file using scp</li>
<li>Upload fullchain.pem as &quot;SSL Certificate&quot; and privkey.pem as &quot;Certificate Private Key&quot;</li>
</ul>
<p>Since last time, the letsencrypt client now complains if you don't pass &quot;certonly&quot; too.</p>
<h1 id="conclusion">Conclusion</h1>
<p>Because I chose to self-host my blog, I'm forced to do some fiddling. But overall, the process was a lot less painful and cheaper than I imagined, even though I have to get the Let's Encrypt certificates manually.</p>
<p>Now I'm set for the future, if I ever choose to move my hosting elsewhere.</p>
<p>I must urgently update to a higher version of Ghost, but that's going to be a lot more painful I think.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Parallel Linq]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>Resharper tells us to convert for(each) loops to Linq. An argument I read was that Linq queries offer options for parallelization.</p>
<p>Iterate over each item in parallel, making it faster on hardware that offers more than one processor.</p>
<p>To put this to the test I made a quick experiment.</p>]]></description><link>https://blog.aaronlenoir.com/2016/11/04/parallel-linq/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa8</guid><category><![CDATA[english]]></category><category><![CDATA[c#]]></category><category><![CDATA[linq]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Fri, 04 Nov 2016 23:59:08 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>Resharper tells us to convert for(each) loops to Linq. An argument I read was that Linq queries offer options for parallelization.</p>
<p>Iterate over each item in parallel, making it faster on hardware that offers more than one processor.</p>
<p>To put this to the test I made a quick experiment.</p>
<h1 id="theproblem">The Problem</h1>
<p>To simulate a problem, I want to compare two byte arrays and count how many items are the same.</p>
<pre><code>int Compare(byte[] dataA, byte[] dataB)
</code></pre>
<h1 id="implementations">Implementations</h1>
<p>First, the naive approach:</p>
<pre><code>static int CompareNaive(byte[] dataA, byte[] dataB)
{
    var result = 0;

    for(var i = 0; i &lt; dataA.Length; i++)
    {
        if (dataA[i] == dataB[i]) { result += 1; }
    }

    return result;
}
</code></pre>
<p>Then using Linq:</p>
<pre><code>static int CompareLinq(byte[] dataA, byte[] dataB)
{
    return dataA
            .Select((data, index) =&gt; dataB[index] == data ? true : false)
            .Count((result) =&gt; result);
}
</code></pre>
<p>I suspect there are multiple ways to do this in Linq, and I might not have taken the most efficient route.</p>
<p>Now, with Linq and some parallelization magic:</p>
<pre><code>static int CompareLinqParallell(byte[] dataA, byte[] dataB)
{
    return dataA
            .AsParallel()
            .Select((data, index) =&gt; dataB[index] == data ? true : false)
            .Count((result) =&gt; result);
}
</code></pre>
<p>The AsParallel call enables parallel evaluation of the Select and Count that follow it.</p>
<h1 id="results">Results</h1>
<p>For 1000 iterations of each function, comparing two 1 MB (1024 * 1024 bytes) arrays:</p>
<table>
 <tr>
  <th>Function</th>
  <th>Average (ms)</th>
 </tr>
 <tr>
  <td>CompareNaive</td>
  <td>0.846</td>
 </tr>
 <tr>
  <td>CompareLinqParallell</td>
  <td>3.746</td>
 </tr>
 <tr>
  <td>CompareLinq</td>
  <td>14.768</td>
 </tr>
</table>
<pre><code>CompareNaive: 4153
CompareLinq: 4153
CompareLinqParallell: 4153
To begin press return.

Starting 'CompareNaive' ...
Ran 'CompareNaive' 1000 times in 00:00:00.8461644 (average of 0,8461644 ms).
To begin press return.

Starting 'CompareLinq' ...
Ran 'CompareLinq' 1000 times in 00:00:14.7678043 (average of 14,7678043 ms).
To begin press return.

Starting 'CompareLinqParallell' ...
Ran 'CompareLinqParallell' 1000 times in 00:00:03.7464192 (average of 3,7464192 ms).
</code></pre>
<h1 id="cpuusage">CPU Usage</h1>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/11/2016-11-05-00_13_34-Broncontrole.png" alt></p>
<p>The naive approach (1) causes a little spike (around 40%) in some of the processors.</p>
<p>The regular Linq approach (2) uses quite a lot of processors for quite a long time. But not all of them.</p>
<p>The parallel Linq approach (3) takes all processors to 100%.</p>
<p>Note: this is on a 4 core machine with 8 logical processors.</p>
<h1 id="caution">Caution</h1>
<p>The AsParallel expression can't just be put anywhere. Consider these two queries:</p>
<pre><code>dataA
    .AsParallel()
    .Select((data, index) =&gt; dataB[index] == data ? true : false)
    .Count((result) =&gt; result);
</code></pre>
<pre><code>dataA
    .AsParallel()
    .Select((data, index) =&gt; dataB[index] == data ? true : false)
    .AsParallel()
    .Count((result) =&gt; result);
</code></pre>
<p>The latter is considerably slower. It takes a few seconds to execute it once for the 1 MB byte array.</p>
<p>Additionally, if the data set is really small, the overhead of AsParallel might not be worth it at all.</p>
<h1 id="measuringexecutiontime">Measuring Execution Time</h1>
<p>For completeness, here's the crude way I measured the execution times:</p>
<pre><code>static void Measure(string name, Action action, int iterations)
{
    var sw = new Stopwatch();

    Console.WriteLine($&quot;Starting '{name}' ...&quot;);

    sw.Start();
    for (int i = 0; i &lt; iterations; i++)
    {
        action();
    }
    sw.Stop();

    var averageElapsed = sw.Elapsed.TotalMilliseconds / iterations;

    Console.WriteLine($&quot;Ran '{name}' {iterations} times in {sw.Elapsed} (average of {averageElapsed} ms).&quot;);
}
</code></pre>
<h1 id="conclusion">Conclusion</h1>
<p>For this problem, the AsParallel expression will speed things up compared to a non-parallel Linq query.</p>
<p>The naive approach is the fastest and the least resource intensive. In my opinion, it's also the easiest to read.</p>
<p>Of course, it depends a lot on what you're doing in the loop. Linq is really powerful and in some cases makes your code easier to read and maintain.</p>
<p>Additionally, if the query is not executed frequently or on small data sets, there's probably no point in worrying about this.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Online Privacy: Practical Tips]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>The Saturday 29 October 2016 edition of Het Nieuwsblad ran an interesting article about privacy, online privacy in particular.</p>
<p>As part of it, the article listed a number of good tips for protecting your privacy online.</p>
<p>Unfortunately, it didn't say how to apply them in practice. I'd like to</p>]]></description><link>https://blog.aaronlenoir.com/2016/11/01/online-privacy-praktische-tips/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa7</guid><category><![CDATA[nederlands]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Tue, 01 Nov 2016 21:56:15 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>The Saturday 29 October 2016 edition of Het Nieuwsblad ran an interesting article about privacy, online privacy in particular.</p>
<p>As part of it, the article listed a number of good tips for protecting your privacy online.</p>
<p>Unfortunately, it didn't say how to apply them in practice. I'd like to supplement it here with some practical pointers.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-11-01-00_13_51-Privacy--bestaat-dat-nog-wel_---Het-Nieuwsblad.png" alt></p>
<p><a href="https://www.nieuwsblad.be/cnt/dmf20161031_02548881">You can read the full article here</a> (if you have a subscription to Het Nieuwsblad ...)</p>
<h1 id="wachtwoorden">Passwords</h1>
<blockquote>
<p>&quot;Use strong passwords.&quot;</p>
</blockquote>
<p>The most important property of a password is its length: the longer, the safer. Every extra character increases security exponentially. For your most important accounts, it's best to create a long password with unusual words and a few digits or special characters.</p>
<p>To create a long password that is also easy to remember, it's best to start from a sentence. Preferably one that doesn't really make sense, but that you can remember.</p>
<ul>
<li>For example: <em>Vrolijke microgolf danst in 1980 in Vlaanderen?</em> (&quot;Cheerful microwave dances in Flanders in 1980?&quot;)</li>
</ul>
<p>As the article also mentions, never use the same password on different sites. Easier said than done, of course.</p>
<p>One way is to keep complex passwords in a notebook. That is relatively safe, as long as you don't leave the notebook lying around: someone would have to be physically present to intercept your passwords.</p>
<p>A bit more practical is a &quot;Password Manager&quot;: a program that keeps track, in an encrypted file, of which username and password you use for which site.</p>
<p>The file is protected with a single <strong>strong</strong> password. So you only have to remember one difficult password, while still using difficult <em>and</em> unique passwords on every site.</p>
<p>I keep that difficult password in my head. For a few other important accounts I keep the passwords on paper. All other passwords live in a password manager.</p>
<p>These applications will also generate random, complex passwords for you.</p>
<p>The easiest to use is &quot;<a href="https://lastpass.com/">LastPass</a>&quot;. You can use it on multiple devices, and it integrates with browsers such as Chrome and Firefox.</p>
<p>One caveat: LastPass stores your encrypted file on their systems, which requires a degree of trust in LastPass. Moreover, it is not always a free service.</p>
<p>A free alternative is <a href="http://keepass.info/">KeePass</a>, a program that keeps the encrypted file locally on your PC. It is somewhat less easy to use than LastPass.</p>
<p>Another excellent alternative is <a href="https://1password.com/">1Password</a>, but it is also a bit harder to use.</p>
<h1 id="updates">Updates</h1>
<p>If an application <strong>itself</strong> asks to be updated, you should do so. <strong>NEVER</strong> do this when you land on a website and the request comes from the website.</p>
<p>This also applies to mobile apps on your tablet or smartphone.</p>
<p>Enable Windows Update if you use Windows. It is sometimes annoying, but very important.</p>
<p>The same goes for Apple and their mobile platforms.</p>
<p>And for Android too, although updates there are far from straightforward, which is a real shame.</p>
<h1 id="maakgeregeldeenbackup">Make Regular Backups</h1>
<p><em><strong>Update October 2017</strong></em>: <em>CrashPlan has decided to stop serving consumer customers. I have switched to Arq Backup combined with B2 (Backblaze) cloud storage. There, too, the files are encrypted locally.</em></p>
<p>I'm not sure how exactly this benefits your privacy, but it is certainly a good tip.</p>
<p>A popular form of cybercrime is encrypting your files and then demanding a ransom. This is called &quot;ransomware&quot;.</p>
<p>With most ransomware, you have little choice without backups: pay up or lose your files.</p>
<p>With backups, you have a chance of restoring a copy of the old files.</p>
<p>It sounds simple: occasionally copy all your files to an external drive. But that quickly becomes tedious. Worse, you might accidentally overwrite the old backup with an already-encrypted copy before you realize you are a victim of ransomware. Then your backups are worthless.</p>
<p>So you need multiple backups from different points in time.</p>
<p>There are services that can do this for you.</p>
<p>I have had very good experiences with &quot;<a href="https://www.crashplan.com/en-us/">CrashPlan</a>&quot;. This application stores your files online and keeps multiple copies over time. So even if your latest copy is encrypted, you can go further back in time.</p>
<p>On top of that, you can first encrypt the files locally with a password of your choosing, so your files are only ever stored at CrashPlan in encrypted form.</p>
<p>Note that this requires trust in CrashPlan's backup service, and it isn't free. But for € 50 / year you get unlimited backup space.</p>
<p>There are certainly other services (from Apple, for example), but CrashPlan was the only one I had found that lets you encrypt your files locally.</p>
<h1 id="openbarewifinetwerken">Public WiFi Networks</h1>
<p>It's convenient to be able to hop onto a WiFi network while on the road.</p>
<p>But beware: these networks cannot always be trusted. Even if the owners act in good faith, the network may be infected with spyware that can steal your data.</p>
<p>The article mentions that you should make sure the websites you visit use https, especially when you share personal data there (such as passwords).</p>
<p>You can see this in the browser's address bar as the &quot;green padlock&quot;. Here's an overview of what it looks like in the various browsers.</p>
<p><strong>Firefox</strong></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-10-30-00_59_38-Google-1.png" alt></p>
<p><strong>Chrome</strong></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-10-30-00_58_37-Google.png" alt></p>
<p><strong>Microsoft Internet Explorer</strong></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-10-30-01_01_54-Google---Internet-Explorer.png" alt></p>
<p><strong>Microsoft Edge</strong></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-10-30-01_02_34-Google----Microsoft-Edge.png" alt></p>
<p><strong>Brave</strong></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-10-30-00_59_13-Google.png" alt></p>
<p><strong>Opera</strong></p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/10/2016-10-30-01_00_22-Google---Opera.png" alt></p>
<h1 id="extra2factorauthentication">Extra: 2-Factor Authentication</h1>
<p>This was not mentioned in the article, but many large sites offer you the option of <a href="https://paul.reviews/the-difference-between-two-factor-and-two-step-authentication/">2 factor authentication</a>.</p>
<p>This means that, besides your username and password, you also have to enter a code that is generated from a secret key you possess, or that is sent to you via SMS at the moment you want to log in. For example, via an app such as &quot;Google Authenticator&quot;.</p>
<p>That way, nobody can log in to your account even if they have your password.</p>
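<p>For the technically curious: the codes generated by apps like &quot;Google Authenticator&quot; follow the TOTP standard (RFC 6238). Here is a minimal sketch in C# of how such a code is derived (an illustration only, not a production implementation; it assumes you already have the raw bytes of the shared secret, whereas real apps first decode it from Base32):</p>
<pre><code>using System;
using System.Security.Cryptography;

public static class Totp
{
    // Derives a 6-digit code from the shared secret and the current time,
    // in 30-second steps (RFC 6238, which builds on HOTP from RFC 4226).
    public static string GenerateCode(byte[] secret, DateTimeOffset now)
    {
        long counter = now.ToUnixTimeSeconds() / 30;

        // The counter is hashed as an 8-byte big-endian value.
        byte[] counterBytes = BitConverter.GetBytes(counter);
        if (BitConverter.IsLittleEndian) { Array.Reverse(counterBytes); }

        using (var hmac = new HMACSHA1(secret))
        {
            byte[] hash = hmac.ComputeHash(counterBytes);

            // &quot;Dynamic truncation&quot;: the last nibble of the hash selects an offset.
            int offset = hash[hash.Length - 1] &amp; 0x0F;
            int binary = ((hash[offset] &amp; 0x7F) &lt;&lt; 24)
                       | (hash[offset + 1] &lt;&lt; 16)
                       | (hash[offset + 2] &lt;&lt; 8)
                       | hash[offset + 3];

            return (binary % 1000000).ToString(&quot;D6&quot;);
        }
    }
}
</code></pre>
<p>Because your phone and the website share the secret and (roughly) the same clock, both can compute the same code without any network connection.</p>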
<h1 id="extrahoewetendatjewachtwoordisgestolen">Extra: How Do You Know Your Password Was Stolen?</h1>
<p>For when a website's data breach becomes public, there is a service called &quot;<a href="https://haveibeenpwned.com/">Have I Been Pwned</a>&quot;. If you sign up there, you get an e-mail whenever your e-mail address (and usually your password too) turns up in such a breach.</p>
<h1 id="extraprivacyinstellingen">Extra: Privacy Settings</h1>
<p>Whether you use Windows, Android, iOS or something else: check in the settings which information is shared with the various apps you use.</p>
<p>Information about you is often shared and stored when that isn't necessary at all.</p>
<h1 id="opgeletvertrouwen">Beware: Trust</h1>
<p>In all cases you have to trust software, websites or companies. Always ask yourself how much trust you want to place in something.</p>
<p>For example, even with 2-factor authentication: can you trust the website offering it to implement it correctly, so that it cannot be bypassed? The answer is &quot;not 100%&quot;.</p>
<p>As the article also says, always be sparing with your data, both online and in daily life. It is very often stored and resold. Even if it isn't sold, it is quite likely to be leaked sooner or later.</p>
<h1 id="conclusie">Conclusion</h1>
<p>The article had some good tips but didn't offer much help in actually applying them.</p>
<p>Hopefully this post adds useful information to the good tips from the Het Nieuwsblad article.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Brave Adblock Payment]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I always use an ad-blocker (ublock origin) on the Internet. This makes for faster-loading websites, fewer distractions, more privacy and less chance of &quot;viruses&quot;.</p>
<p>A reasonable argument against ad-blockers is that some websites lose income because of them. They offer free services but receive no advertising revenue.</p>
<p>There is</p>]]></description><link>https://blog.aaronlenoir.com/2016/10/26/brave-adblock-payment/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa3</guid><category><![CDATA[nederlands]]></category><category><![CDATA[brave]]></category><category><![CDATA[review]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Wed, 26 Oct 2016 21:28:16 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I always use an ad-blocker (ublock origin) on the Internet. This makes for faster-loading websites, fewer distractions, more privacy and less chance of &quot;viruses&quot;.</p>
<p>A reasonable argument against ad-blockers is that some websites lose income because of them. They offer free services but receive no advertising revenue.</p>
<p>There is currently an experimental browser (like Firefox, Chrome or Internet Explorer) trying to offer an alternative model: Brave.</p>
<h2 id="micropayments">Micro Payments</h2>
<p>In short: the browser sees which websites you visit and distributes a set amount of money across those websites. Websites you visit often get a bigger share of the pie.</p>
<p>You can set the budget you want to distribute yourself, per month.</p>
<p>The payments are said to be fairly anonymous.</p>
<p>No explicit participation from the visited websites is required. When Brave sees that a certain amount could go to a website and there is no payment arrangement yet, they contact that website so the money can be transferred.</p>
<p>Note that all of this is still in a test phase.<br>
I wanted to give it a try. This post is the account of that attempt.</p>
<h2 id="stap1braveinstalleren">Step 1: Installing Brave</h2>
<p>Brave is similar to other browsers. The main difference is that ad blocking, anti-tracking and similar functionality is built in. So you don't need to install additional extensions.</p>
<p>I have to say that Brave is not yet as pleasant to work with as Firefox and Chrome. But at the time of writing they were only at version 0.12.1.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-00_14_28-.png" alt="Brave Start Screen"></p>
<p>The settings contain the details for enabling the payment system. It is disabled by default.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-00_17_13-.png" alt="Brave Payments"></p>
<h2 id="stap2bravepaymentsinschakelen">Step 2: Enabling Brave Payments</h2>
<p>Brave Payments works via &quot;Bitcoin&quot;. That is a form of electronic money, based on mathematics and encryption. Explaining how it works would take me too far here. The basics: you have a local wallet. In practice this is just a small file on your computer.</p>
<p>Such an - empty - wallet is created when you enable Brave Payments.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-00_19_06-Preferences.png" alt="Brave Payments Enabled"></p>
<p>The first step is getting Bitcoin into the wallet.</p>
<h2 id="stap3bitcoinaankopen">Step 3: Buying Bitcoin</h2>
<p>There are several services that let you buy Bitcoin. Keep in mind that Bitcoin has an exchange rate, just like any other currency.</p>
<p>Brave offers you the choice of buying Bitcoin with a Bancontact card or a credit card. Alternatively, you can load money from an existing Bitcoin wallet.</p>
<p>I don't have a Bitcoin wallet myself, so I will have to transfer the money. Brave offers you the option via the Coinbase service, which receives your payment and converts it into Bitcoin.</p>
<p>Beware: although Bitcoin is not that new anymore, it remains a peculiar currency, and you have to place some trust in the services that will convert your money into Bitcoin.</p>
<p>What strikes me is that Brave currently seems to expect USD (US Dollars) regardless. That means we have to account for two exchange rates: Euro to USD and USD to Bitcoin.</p>
<p>At the time of writing, € 1 corresponded to $ 1.12 and 1 Bitcoin corresponded to 609.9 USD.</p>
<h3 id="obstakel1">Obstacle 1</h3>
<p>Unfortunately, I could not buy Bitcoin via Coinbase in Belgium.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-00_33_14-Preferences.png" alt></p>
<p>This is odd, because the Coinbase website says this should be possible in Belgium.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-00_36_32-Coinbase-Supported-Countries---Coinbase.png" alt></p>
<p>To work around this, I will first have to create a Bitcoin wallet and transfer money from there to my Brave wallet.</p>
<h2 id="stap31bitcoinportefeuilleaanmakenviacoinbase">Step 3.1 Creating a Bitcoin Wallet via Coinbase</h2>
<p><em>Note: You can probably deposit straight to your Brave wallet and don't need to create a Coinbase wallet at all.</em></p>
<p>Coinbase will create and manage a Bitcoin wallet for you. A bit like PayPal, you can then load money onto the wallet. It does remain on the Coinbase site, though.</p>
<p>The first step is signing up. Coinbase asks you to link your mobile phone number, but I just skip that.</p>
<p>After this you get an overview of an empty wallet.</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-22_50_51-BTC-Wallet---Coinbase.png" alt></p>
<p>Via &quot;Settings&quot; you can add a &quot;Payment Method&quot; on Coinbase:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-14-22_56_11-Payment-Methods---Coinbase.png" alt></p>
<p>For this you have to identify yourself, including with a photo of your identity card. I have little interest in that.</p>
<h2 id="stap311bitcoinsaankopenzondercoinbase">Step 3.1.1 Buying Bitcoins without Coinbase</h2>
<p>Via Coinbase I now have a &quot;wallet&quot;. I want to deposit money into my Coinbase wallet via another service.</p>
<p>Via Coinbase I can look up the &quot;address&quot; of my wallet.</p>
<p>By now it is clear that Brave's use of Bitcoin is quite a barrier for people with no experience with it. I can imagine it is easier in some countries than in others.</p>
<p>A simple Google search leads me to Bitonic.nl. There you can buy Bitcoin with Mister Cash. This should also work in Belgium.</p>
<ul>
<li>I enter 10 euros to purchase</li>
<li>I get a warning that larger amounts are more economical because of transaction costs. No doubt. I suspect there are better alternatives for this.</li>
<li>Next I select Mister Cash as the payment method</li>
<li>I enter my Bitcoin address, as obtained from Coinbase</li>
<li>Then I enter my IBAN account number</li>
<li>After checkout I am shown a payment form, hosted by &quot;bancontact.pay.be&quot;</li>
<li>The rest of the payment proceeds like a regular Internet payment with Bancontact</li>
<li>I get a message that the payment succeeded and that my bitcoins &quot;will be sent within about 1 minute&quot;</li>
</ul>
<p>A minute later I see a notification of the transaction at Coinbase, with status &quot;Pending&quot; and an amount of 0.01703405 Bitcoin, which at this moment corresponds to €9.23. So the transaction cost me €0.77.</p>
<p>The transaction waits for the Bitcoin network to verify it. As far as I understand Bitcoin, it comes down to everyone who participates sharing the same list of transactions. Which transactions are legitimate is agreed upon (automatically) among the participants.</p>
<p>After half an hour I have 2 confirmations, so it is just a matter of time. Apparently Coinbase requires 3 confirmations before the transfer is confirmed.</p>
<p>After about an hour I have these confirmations. Apparently this goes faster for larger amounts.</p>
<h2 id="stap32bitcoinnaarbravestorten">Step 3.2 Depositing Bitcoin into Brave</h2>
<p>Now that there is 0.01703405 Bitcoin in my wallet, I should be able to transfer it to my Brave wallet. Back in Brave, I now choose the option to transfer Bitcoin from an existing Bitcoin wallet (Send / Receive).</p>
<p>With a Bitcoin app I could scan a QR code, but I don't have one yet. I can, however, make a transfer to a Bitcoin address:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-15-00_13_29-Preferences.png" alt></p>
<p>On Coinbase you can enter a bitcoin address under &quot;Send/Receive&quot;. You can then enter an amount and make the transfer.</p>
<p>Then it is again a matter of waiting until the transfer is complete.</p>
<p><strong>It appears you don't have to verify your ID by uploading your identity card for this! So definitely don't.</strong></p>
<p>After two confirmations I see my amount ($5.66) in Brave.</p>
<h2 id="stap4surfen">Step 4 Browsing</h2>
<p>You can set your monthly budget. Every 30 days that budget is distributed across the websites you have visited.</p>
<p>If there are websites you don't want to participate (because you already pay them, for example), you can exclude them:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/09/2016-09-15-00_54_27-Preferences.png" alt></p>
<p>One remark after the payment: it is not possible to see exactly how much of your contribution goes to which sites.</p>
<p>I would like to be able to track this a bit better.</p>
<h2 id="conclusie">Conclusion</h2>
<p>The concept of Brave is very interesting. Its use of Bitcoin undoubtedly creates a high barrier to entry, certainly for less technical users or users without Bitcoin experience.</p>
<p>Hopefully the idea gains traction, so that later you can also hand out a pot of money in more practical ways in exchange for an ad-free Internet. If enough people want to use this system, websites will hopefully be less inclined to spread annoying ads and still get paid fairly.</p>
<p>The question is whether enough people consciously (!) experience enough annoyance to make this attractive. It will also have to become simpler than Bitcoin to broaden adoption.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[FlacLibSharp Found a User]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>Some time ago I started work on a small library to read FLAC file metadata. Nice to know publishing your niche personal projects can help people.</p>
<p>I was going to use it to manage my small collection of music.</p>
<p>At first, it was all about reading the track length. I</p>]]></description><link>https://blog.aaronlenoir.com/2016/07/23/putting-your-stuff-online-helps/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa5</guid><category><![CDATA[FlacLibSharp]]></category><category><![CDATA[c#]]></category><category><![CDATA[.net]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Sat, 23 Jul 2016 21:16:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>Some time ago I started work on a small library to read FLAC file metadata. Nice to know publishing your niche personal projects can help people.</p>
<p>I was going to use it to manage my small collection of music.</p>
<p>At first, it was all about reading the track length. I wrote that on the train during my commute.</p>
<p>Eventually, I ended up supporting all metadata of the FLAC format.</p>
<p>I wrote it for .NET, which did not seem to have a managed library for that at the time.</p>
<p>I added support for writing metadata to FLAC files, even though I saw no use case for it myself. It was more an exercise in discipline.</p>
<p>For myself, I felt that I had to &quot;finish&quot; it completely. Meaning, support all metadata in the spec and publish the library. Not with the assumption anyone would need it, but just to finish one of my side-projects for once. I have many side-projects that equate to &quot;half a page of scribbled lines&quot;.</p>
<p>So I published <a href="https://www.nuget.org/packages/FlacLibSharp">FlacLibSharp</a> on NuGet and put the source on BitBucket and later on <a href="https://github.com/AaronLenoir/flaclibsharp">GitHub</a>.</p>
<p>I never considered anyone would have a use case for this. Supporting only FLAC metadata for .NET seems really specific.</p>
<p>Nothing much did happen.</p>
<p>Downloads on NuGet seemed to come, for the most part, from bots downloading the packages.</p>
<p>Googling the library only gave me results related to NuGet and GitHub itself.</p>
<p>It did get 6 stars on GitHub after a while.</p>
<p>Then somebody opened an <a href="https://github.com/AaronLenoir/flaclibsharp/issues/28">issue on GitHub</a>, noting that the library lacked support for something encouraged by the <a href="https://www.xiph.org/vorbis/doc/v-comment.html">VORBIS comment specification</a>.</p>
<p>If it was just for me, I would not have touched the library anymore. But it turns out someone was actually using it.</p>
<p>So I was more than happy to address the issue.</p>
<h2 id="thoughts">Thoughts</h2>
<p>So it turns out just putting your stuff online can help somebody. Even though you don't expect it.</p>
<p>If it weren't for GitHub, I wouldn't have known about it. Places like GitHub make it easy to open issues and even suggest changes to the code. That was possible before, but never this easy.</p>
<p>It's also kind of scary. You put something online, perhaps badly tested, and others start to use it.</p>
<p>What happens if they use it for something important, and it totally screws it up?</p>
<p>What happens if your project becomes popular?</p>
<p>Are you prepared? Are you responsible? Are you accountable? Do you owe the users anything?</p>
<h2 id="conclusion">Conclusion</h2>
<p>I'll put my stuff out there. If it's untested or otherwise risky, I'll try to mention that. And I'll take the risk of becoming popular, or helping someone.</p>
<p>Also, if you find a project like this useful, consider informing the creators about it. It might make their day ;-)</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Akka.NET: Creating a File Download Actor with Progress Reporting]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>In this post, I walk through the creation of a single Actor for <a href="http://getakka.net/">Akka.NET</a>. It will download files and report its progress to other Actors.</p>
<p>I'll go through all details but might skim over some of the basics of Akka.NET.</p>
<h2 id="actormodel">Actor Model</h2>
<p>Here's a short introduction.</p>
<p>Each &quot;</p>]]></description><link>https://blog.aaronlenoir.com/2016/06/21/akka-net-creating-a-download-actor/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa4</guid><category><![CDATA[english]]></category><category><![CDATA[c#]]></category><category><![CDATA[akka.net]]></category><category><![CDATA[walkthrough]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Tue, 21 Jun 2016 19:04:11 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>In this post, I walk through the creation of a single Actor for <a href="http://getakka.net/">Akka.NET</a>. It will download files and report its progress to other Actors.</p>
<p>I'll go through all details but might skim over some of the basics of Akka.NET.</p>
<h2 id="actormodel">Actor Model</h2>
<p>Here's a short introduction.</p>
<p>Each &quot;actor&quot; in the system has a clear responsibility. Actors communicate by sending and receiving <strong>immutable</strong> messages.</p>
<p>There can be many instances of a certain Actor. An Actor is cheap. They have an internal state, but <strong>do not share state</strong> with other Actors, or anything else.</p>
<p>This allows Akka.NET to run your code in parallel. At the same time, the code you write inside an Actor runs sequentially, so you never have to worry about locks. Since the only state you change is internal and the code inside a single actor runs synchronously, you will be fine.</p>
<h2 id="thedownloadactor">The Download Actor</h2>
<p>To put it in practice, we'll try to make an Actor that has one simple task: to download something from a URL.</p>
<p>It will:</p>
<ul>
<li>Accept multiple download requests (executed sequentially)</li>
<li>Run a single download at a time</li>
<li>Report start, end and progress in the form of a Message to the requestor</li>
</ul>
<h2 id="thebasics">The basics</h2>
<p>First, we create an Actor called <code>FileDownloadActor</code></p>
<pre><code>public class FileDownloadActor : ReceiveActor
{
}
</code></pre>
<p>Then we can define the messages. We'll define one incoming message <code>RequestDownload</code> and three outgoing messages <code>DownloadStarted</code>, <code>DownloadCompleted</code> and <code>DownloadProgressed</code>:</p>
<pre><code>public class FileDownloadActor : ReceiveActor
{
    #region Messages

    public class RequestDownload
    {
        public Uri Uri { get; private set; }
        public RequestDownload(Uri uri) { Uri = uri; }
    }

    public class DownloadStarted
    {
        public Uri Uri { get; private set; }
        public DownloadStarted(Uri uri) { Uri = uri; }
    }

    public class DownloadCompleted
    {
        public Uri Uri { get; private set; }
        public string Path { get; private set; }
        public DownloadCompleted(Uri uri, string path)
        {
            Uri = uri;
            Path = path;
        }
    }

    public class DownloadProgressed
    {
        public Uri Uri { get; private set; }
        public double Progress { get; private set; }
        public DownloadProgressed(Uri uri, double progress)
        {
            Uri = uri;
            Progress = progress;
        }
    }

    #endregion
}
</code></pre>
<blockquote>
<p>Note that we put the message classes inside the Actor class. This is not a requirement, but I find it adds clarity.</p>
</blockquote>
<p>So everything starts with a <code>RequestDownload</code>, providing only a Uri.</p>
<p>When the download starts, the requestor is informed of this. The <code>DownloadProgressed</code> message also includes the progress.</p>
<p>When the download completes, the path of the downloaded file is also provided. This is actually shared state between two Actors (the DownloadActor and the Sender): it's assumed both actors can access that path. But sending a large byte array in a message would be bad for performance.</p>
<h2 id="keepingstate">Keeping State</h2>
<p>The <code>DownloadActor</code> will have to keep some state:</p>
<ul>
<li>The Uri that is currently being downloaded</li>
<li>The path of the file that is being written</li>
<li>A reference to the requester of the current download, which is another Actor</li>
</ul>
<p>There's some other stuff we'll need, but we can introduce that later. So we'll need a class member field for each of these items.</p>
<pre><code>public class FileDownloadActor : ReceiveActor
{
    #region State

    private Uri _currentUri;
    private string _currentTargetPath;
    private IActorRef _currentDownloadRequestor;

    #endregion

    #region Messages
    ...
    #endregion
}
</code></pre>
<h2 id="initialization">Initialization</h2>
<p>Actors are always in a certain state. The state defines which messages the actor handles and how it responds to them. In our case the <code>DownloadActor</code> can be ready to accept new download requests, or can be downloading.</p>
<p>Here's the states defined for the <code>DownloadActor</code>:</p>
<pre><code>public class FileDownloadActor : ReceiveActor
{
    #region State

    ...

    #endregion

    #region Initialization

    public FileDownloadActor()
    {
        Become(Ready);
    }

    #endregion

    #region Messages

    ...

    #endregion

    #region States

    public void Ready()
    {
        Receive&lt;RequestDownload&gt;(message =&gt;
        {
            HandleDownloadRequest(message);
            Become(Downloading);
        });
    }

    public void Downloading()
    {
        // TODO: Handle incoming Download Requests while downloading
        // TODO: Go back to Ready state when a download completes
    }

    #endregion
}
</code></pre>
<p>I will come back to the TODO's shortly, but first let's implement the <code>HandleDownloadRequest</code> method.</p>
<pre><code>public class FileDownloadActor : ReceiveActor
{
    #region State

    ...

    #endregion

    #region Messages

    ...

    #endregion

    #region States

    ...

    #endregion

    #region Handlers

    private void HandleDownloadRequest(RequestDownload message)
    {
        _currentUri = message.Uri;
        _currentTargetPath = Path.GetTempFileName();
        _currentDownloadRequestor = Sender;

        _currentDownloadRequestor.Tell(new DownloadStarted(_currentUri));
    }

    #endregion
}
</code></pre>
<p>Note that inside a message handler we have access to the &quot;Sender&quot; property, which references the Actor that sent the current message.</p>
<h2 id="doingthedownload">Doing the Download</h2>
<p>A naive implementation could be the following. Careful though, in the next steps we'll be improving a lot of things.</p>
<pre><code>public class FileDownloadActor : ReceiveActor
{
    #region State

    ...

    #endregion

    #region Messages

    ...

    #endregion

    #region States

    ...

    #endregion

    #region Handlers

    private void HandleDownloadRequest(RequestDownload message)
    {
        ...

        StartDownload();
    }

    #endregion

    #region Helper Functions

    private void StartDownload()
    {
        var client = new WebClient();

        // Careful, this is not the best way to do this!
        client.DownloadFile(_currentUri, _currentTargetPath);
        _currentDownloadRequestor.Tell(new DownloadCompleted(_currentUri, _currentTargetPath));
        Become(Ready);
    }

    #endregion
}
</code></pre>
<h2 id="testing">Testing</h2>
<p>We now have a basic Download Actor. It still has a lot of problems, but it's enough to start testing.</p>
<p>Due to the parallel nature of Akka.NET, testing with unit test frameworks can be a bit tricky. It's <a href="https://petabridge.com/blog/how-to-unit-test-akkadotnet-actors-akka-testkit/">not hard</a>, but it's a little different and I won't go into it here. I'll give you a hint: use Akka.TestKit.</p>
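<p>As a rough sketch of what such a test could look like with Akka.TestKit (illustrative only; it assumes the Akka.TestKit.Xunit2 package, and hitting a real URL from a unit test is something you'd normally avoid):</p>
<pre><code>using System;
using Akka.Actor;
using Akka.TestKit.Xunit2;
using Xunit;

public class FileDownloadActorTests : TestKit
{
    [Fact]
    public void Download_request_reports_started()
    {
        var actor = Sys.ActorOf(Props.Create(() =&gt; new FileDownloadActor()));

        // The TestKit's built-in TestActor is the implicit Sender here.
        actor.Tell(new FileDownloadActor.RequestDownload(
            new Uri(&quot;http://getakka.net/images/akkalogo.png&quot;)));

        // ExpectMsg blocks until the message arrives or a timeout elapses.
        ExpectMsg&lt;FileDownloadActor.DownloadStarted&gt;();
    }
}
</code></pre>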
<p>For now, we'll just test it in a simple Console application.</p>
<pre><code>class Program
{
    static void Main(string[] args)
    {
        var system = ActorSystem.Create(&quot;downloader&quot;);

        var props = Props.Create(() =&gt; new FileDownloadActor());
        var actor = system.ActorOf(props);

        var requestDownload = new FileDownloadActor.RequestDownload(new Uri(&quot;http://getakka.net/images/akkalogo.png&quot;));
        actor.Tell(requestDownload);

        Console.ReadLine();
    }
}
</code></pre>
<h2 id="issuewiththecurrentimplementation">Issue with the current implementation</h2>
<p>As mentioned, there are quite a few issues here:</p>
<ul>
<li>How do we know when the download is completed, and where the file is? Who handles the started, progress and completed messages?</li>
<li>The download action is a long-running synchronous operation. That's not a good thing in Akka. The action should be asynchronous.</li>
<li>While the actor is downloading, it ignores other incoming requests</li>
</ul>
<h2 id="creatingatestactor">Creating a TestActor</h2>
<p>To solve the first issue, we can create a simple test actor that writes something on the console for each received message. The test actor will have a reference to a <code>DownloadActor</code>:</p>
<pre><code>public class TestActor : ReceiveActor
{
    private readonly IActorRef _downloadActor;
    private readonly Uri _testUri;

    public TestActor(IActorRef downloadActor, Uri testUri)
    {
        _downloadActor = downloadActor;
        _testUri = testUri;

        Become(Ready);
    }

    public class StartTest { };

    public void Ready()
    {
        Receive&lt;StartTest&gt;(message =&gt; HandleStartTest(message));
        Receive&lt;FileDownloadActor.DownloadStarted&gt;(message =&gt; HandleDownloadStarted(message));
        Receive&lt;FileDownloadActor.DownloadProgressed&gt;(message =&gt; HandleDownloadProgressed(message));
        Receive&lt;FileDownloadActor.DownloadCompleted&gt;(message =&gt; HandleDownloadCompleted(message));
    }

    private void HandleStartTest(StartTest message)
    {
        Console.WriteLine($&quot;Starting test for Uri '{_testUri}'&quot;);
        _downloadActor.Tell(new FileDownloadActor.RequestDownload(_testUri));
    }

    private void HandleDownloadStarted(FileDownloadActor.DownloadStarted message)
    {
        Console.WriteLine($&quot;Download started Uri '{_testUri}'&quot;);
    }

    private void HandleDownloadProgressed(FileDownloadActor.DownloadProgressed message)
    {
        Console.WriteLine($&quot;Download progressed for Uri '{_testUri}': {message.Progress}&quot;);
    }

    private void HandleDownloadCompleted(FileDownloadActor.DownloadCompleted message)
    {
        Console.WriteLine($&quot;Download completed for Uri '{_testUri}'.&quot;);
    }
}
</code></pre>
<p>Usage is like so:</p>
<pre><code>class Program
{
    static void Main(string[] args)
    {
        var system = ActorSystem.Create(&quot;downloader&quot;);

        var props = Props.Create(() =&gt; new FileDownloadActor());
        var actor = system.ActorOf(props);
        var testProps = Props.Create(() =&gt; new TestActor(actor, new Uri(&quot;http://getakka.net/images/akkalogo.png&quot;)));
        var testActor = system.ActorOf(testProps);
        testActor.Tell(new TestActor.StartTest());

        Console.ReadLine();
    }
}
</code></pre>
<p>The output of this application should now be:</p>
<pre><code>Starting test for Uri 'http://getakka.net/images/akkalogo.png'.
Download started Uri 'http://getakka.net/images/akkalogo.png'.
Download completed for Uri 'http://getakka.net/images/akkalogo.png'.
</code></pre>
<h2 id="handleasynchronousdownloads">Handle Asynchronous Downloads</h2>
<p>There are several ways to make the <code>WebClient</code> do asynchronous downloads:</p>
<ul>
<li><code>WebClient.DownloadFileAsync</code>: uses callbacks</li>
<li><code>WebClient.DownloadFileTaskAsync</code>: uses the Task Parallel Library</li>
</ul>
<p>Here, we'll use <code>DownloadFileAsync</code>. For an example of using Tasks inside Actors, check out the <a href="https://github.com/petabridge/akkadotnet-code-samples/tree/master/PipeTo">Akka.NET PipeTo Sample</a>.</p>
<p>The problem with the PipeTo approach is that our async operation also reports feedback before it is done. Getting progress reporting to work that way seemed quite hard.</p>
<p>With callbacks, it's easier. We just handle the two callbacks (progress and success).</p>
<p>A naive approach - that actually works - is the following:</p>
<pre><code>    private void StartDownload()
    {
        var client = new WebClient();

        client.DownloadProgressChanged += HandleWebClientDownloadProgressChanged;
        client.DownloadFileCompleted += HandleWebClientDownloadCompleted;
        client.DownloadFileAsync(_currentUri, _currentTargetPath);
        // Note: the actor stays in the Downloading state until the download completes.
    }

    private void HandleWebClientDownloadProgressChanged(object sender, DownloadProgressChangedEventArgs e)
    {
        _currentDownloadRequestor.Tell(new DownloadProgressed(_currentUri, e.ProgressPercentage));
    }

    private void HandleWebClientDownloadCompleted(object sender, AsyncCompletedEventArgs e)
    {
        _currentDownloadRequestor.Tell(new DownloadCompleted(_currentUri, _currentTargetPath));
    }
</code></pre>
<blockquote>
<p>The event handlers run outside the context of the actor. Using the state could be an issue. The best implementation would be to make the handlers send a simple message to the actor itself and access the state in those message handlers.</p>
</blockquote>
<p>The output should now be something like this:</p>
<pre><code>Starting test for Uri 'http://getakka.net/images/akkalogo.png'.
Download started Uri 'http://getakka.net/images/akkalogo.png'.
Download progressed for Uri 'http://getakka.net/images/akkalogo.png': 100.
Download progressed for Uri 'http://getakka.net/images/akkalogo.png': 100.
Download completed for Uri 'http://getakka.net/images/akkalogo.png'.
</code></pre>
<p>It seems like <code>WebClient</code> fires the progress event twice, but that's not an issue.</p>
<h2 id="queueingrequests">Queueing Requests</h2>
<p>In the current implementation requests that come in when the Actor is already downloading are ignored.</p>
<p>Luckily, Akka.NET has a way to &quot;stash&quot; these messages for later use. We have to do a few steps to make this work:</p>
<ul>
<li>Implement the <code>IWithUnboundedStash</code> in the Actor</li>
<li>While downloading, push incoming <code>RequestDownload</code> messages on the stash</li>
<li>When the download is completed make the stash send the queued messages</li>
</ul>
<h3 id="implementtheiwithunboundedstashintheactor">Implement the <code>IWithUnboundedStash</code> in the Actor</h3>
<pre><code>public class FileDownloadActor : ReceiveActor, IWithUnboundedStash
{
    ...
    public IStash Stash { get; set; }
    ...
}
</code></pre>
<h3 id="whiledownloadingpushincomingrequestdownloadmessagesonthestash">While downloading, push incoming <code>RequestDownload</code> messages on the stash</h3>
<pre><code>public class FileDownloadActor : ReceiveActor, IWithUnboundedStash
{
    ...

    #region States

    ...

    public void Downloading()
    {
        Receive&lt;RequestDownload&gt;(message =&gt; Stash.Stash());
        // TODO: Go back to Ready state when a download completes
    }

    #endregion

    ...
}
</code></pre>
<h3 id="whenthedownloadiscompletedmakethestashsendthequeuedmessages">When the download is completed make the stash send the queued messages</h3>
<pre><code>public class FileDownloadActor : ReceiveActor, IWithUnboundedStash
{
    ...

    #region States

    ...

    public void Downloading()
    {
        Receive&lt;RequestDownload&gt;(message =&gt; Stash.Stash());
        Receive&lt;DownloadCompleted&gt;(message =&gt; {
            Become(Ready);
            Stash.UnstashAll();
        });
    }

    #endregion

    ...
}
</code></pre>
<p>Now, the output is something like the following, which shows the final problem:</p>
<pre><code>Starting test for Uri 'http://getakka.net/images/akkalogo.png'.
Download started Uri 'http://getakka.net/images/akkalogo.png'.
Download progressed for Uri 'http://getakka.net/images/akkalogo.png': 100.
Download progressed for Uri 'http://getakka.net/images/akkalogo.png': 100.
Download completed for Uri 'http://getakka.net/images/akkalogo.png'.
</code></pre>
<p>The Actor itself doesn't know yet when to come out of the &quot;Downloading&quot; state.</p>
<p>To do that, it has to send itself a DownloadCompleted message. A naive way - that doesn't work this time - could be the following:</p>
<pre><code>public class FileDownloadActor : ReceiveActor, IWithUnboundedStash
{
    ...

    #region Helper Functions

    ...

    private void HandleWebClientDownloadCompleted(object sender, AsyncCompletedEventArgs e)
    {
        _currentDownloadRequestor.Tell(new DownloadCompleted(_currentUri, _currentTargetPath));
        Self.Tell(new DownloadCompleted(_currentUri, _currentTargetPath));
    }

    #endregion
}
</code></pre>
<p>Running this produces the following exception:</p>
<pre><code>An unhandled exception of type 'System.NotSupportedException' occurred in Akka.dll

Additional information: There is no active ActorContext, this is most likely due to use of async operations from within this actor.
</code></pre>
<p>We cannot use the <code>Self</code> property from the event handler, but we still have to send ourselves a message somehow.</p>
<p>We can, however, create a field <code>_self</code>, assign it the <code>Self</code> property in the constructor, and use that field from the handler.</p>
<pre><code>public class FileDownloadActor : ReceiveActor, IWithUnboundedStash
{
    #region State

    ...

    private IActorRef _self;

    ...

    #endregion

    #region Initialization

    public FileDownloadActor()
    {
        _self = Self;
        Become(Ready);
    }

    #endregion

    ...

    #region Helper Functions

    ...

    private void HandleWebClientDownloadCompleted(object sender, AsyncCompletedEventArgs e)
    {
        _currentDownloadRequestor.Tell(new DownloadCompleted(_currentUri, _currentTargetPath));
        _self.Tell(new DownloadCompleted(_currentUri, _currentTargetPath));
    }

    #endregion
}
</code></pre>
<p>This will make the Actor able to handle multiple requests, sequentially.</p>
<h2 id="takingadvantageofactorssinglethreadednature">Taking Advantage of the Actor's Single-Threaded Nature</h2>
<p>Because we know the Actor will do one download at a time, we can re-use a single instance of the <code>WebClient</code> class.</p>
<pre><code>public class FileDownloadActor : ReceiveActor, IWithUnboundedStash
{
    #region State

    ...

    private readonly WebClient _client = new WebClient();

    ...

    #endregion

    #region Initialization

    public FileDownloadActor()
    {
        _self = Self;
        _client.DownloadProgressChanged += HandleWebClientDownloadProgressChanged;
        _client.DownloadFileCompleted += HandleWebClientDownloadCompleted;
        Become(Ready);
    }

    #endregion

    ...

    #region Helper Functions

    private void StartDownload()
    {
        _client.DownloadFileAsync(_currentUri, _currentTargetPath);
        Become(Ready);
    }

    ...

    #endregion
}
</code></pre>
<h2 id="runninginparallell">Running in Parallel</h2>
<p>You might be thinking that this is very much not parallel. After all, the Download Actor only downloads one URI at a time.</p>
<p>That is true, but we could create a &quot;pool&quot; of such download actors and use them simultaneously. In fact, Akka.NET makes that a one-liner, as shown below.</p>
<p>In our test application we have the following code to create our Download Actor:</p>
<pre><code>    var props = Props.Create(() =&gt; new FileDownloadActor());
    var actor = system.ActorOf(props);
</code></pre>
<p>The actor is then passed to the TestActor who sends a number of download requests like so:</p>
<pre><code>    private void HandleStartTest(StartTest message)
    {
        Console.WriteLine($&quot;Starting test for Uri '{_testUri}'.&quot;);
        _downloadActor.Tell(new FileDownloadActor.RequestDownload(_testUri));
        _downloadActor.Tell(new FileDownloadActor.RequestDownload(_testUri));
        _downloadActor.Tell(new FileDownloadActor.RequestDownload(_testUri));
    }
</code></pre>
<p>Running this will show the three downloads happen in sequence.</p>
<p>To allow multiple download requests to be handled simultaneously, we can simply add a <code>WithRouter</code> statement to the Actor creation:</p>
<pre><code>        var props = Props.Create(() =&gt; new FileDownloadActor()).WithRouter(new RoundRobinPool(10));
        var actor = system.ActorOf(props);
</code></pre>
<p>This is also why an <code>IActorRef</code> is passed around and not just an instance of the actor: an <code>IActorRef</code> can refer to one instance or even a pool of instances of an Actor, optionally spread over multiple machines when that makes sense. This is where using Actors starts to pay off: it handles some of the complexity of running things in parallel.</p>
<p>Another approach could be to create a dispatching Actor which creates one Download Actor per hostname, for example.</p>
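<p>A minimal sketch of such a dispatching Actor (hypothetical, not part of this post's sample code; it assumes <code>RequestDownload</code> exposes its <code>Uri</code>) could look like this:</p>
<pre><code>public class DownloadDispatcherActor : ReceiveActor
{
    // One FileDownloadActor per hostname (sketch; the names are assumptions).
    private readonly Dictionary&lt;string, IActorRef&gt; _downloaders =
        new Dictionary&lt;string, IActorRef&gt;();

    public DownloadDispatcherActor()
    {
        Receive&lt;FileDownloadActor.RequestDownload&gt;(message =&gt;
        {
            var host = message.Uri.Host;
            if (!_downloaders.TryGetValue(host, out var downloader))
            {
                downloader = Context.ActorOf(Props.Create(() =&gt; new FileDownloadActor()));
                _downloaders.Add(host, downloader);
            }

            // Forward keeps the original sender, so replies still reach the requester.
            downloader.Forward(message);
        });
    }
}
</code></pre>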
<h2 id="openissues">Open Issues</h2>
<p>There are some things that can improve this Actor:</p>
<ul>
<li>If, for some reason, the same Uri is requested twice or more by the same Actor, it will be unclear to the requester which of the requests was completed, because the replies only contain the Uri. It would be better if the requester provided a unique ID with each request, for example a <code>Guid</code>.</li>
<li>When a download goes wrong, the Actor should inform the requester. Either by a message or by simply crashing and using Akka.NET's supervision capabilities.</li>
<li>I'm not 100% sure about keeping a reference to <code>Self</code>. Since <code>Self</code> is an <code>IActorRef</code>, and an <code>IActorRef</code> isn't guaranteed to point to that exact instance of the Actor class, I'm not sure it's foolproof.</li>
</ul>
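<p>For the first point, the request message could carry an ID. A hedged sketch (the <code>Id</code> property is my addition, not in the original code):</p>
<pre><code>public class RequestDownload
{
    // Added: lets the requester correlate replies with this specific request.
    public Guid Id { get; }
    public Uri Uri { get; }

    public RequestDownload(Uri uri)
    {
        Id = Guid.NewGuid();
        Uri = uri;
    }
}
</code></pre>
<p>The reply messages (such as <code>DownloadProgressed</code> and <code>DownloadCompleted</code>) would then echo that same <code>Id</code> back.</p>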
<h2 id="conclusion">Conclusion</h2>
<p>With this post, I wanted to explain to you, and to myself, how to create a Download Actor that handles asynchronous calls inside itself. Also, I hope it introduces Akka.NET in a practical way.</p>
<p>I find it an enjoyable and different way to think about applications, and it takes away a lot of the pain of parallel programming in .NET (&quot;threading&quot;).</p>
<p>Even if you don't create massively distributed applications, Akka.NET can be used in small-scale applications, just to abstract away the technical details of running things in parallel. Which is exactly what I'll be trying with our download actor.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Integrating WPF and Akka.NET]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I'm wondering how to combine an Akka.NET actor system with a WPF front-end.</p>
<p>The <a href="http://doc.akka.io/docs/akka/snapshot/general/actors.html">Actor Model</a> provides a nice way to build concurrent systems. There are several implementations for .NET. <a href="https://github.com/dotnet/orleans">Orleans</a> and <a href="http://getakka.net/">Akka.Net</a> are the best known.</p>
<p>As an exercise, let's build a &quot;thermostat&quot; system.</p>
<p>I'll</p>]]></description><link>https://blog.aaronlenoir.com/2016/04/20/integrating-wpf-and-akka-net-2/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa1</guid><category><![CDATA[english]]></category><category><![CDATA[windows]]></category><category><![CDATA[c#]]></category><category><![CDATA[wpf]]></category><category><![CDATA[akka.net]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Wed, 20 Apr 2016 23:32:47 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I'm wondering how to combine an Akka.NET actor system with a WPF front-end.</p>
<p>The <a href="http://doc.akka.io/docs/akka/snapshot/general/actors.html">Actor Model</a> provides a nice way to build concurrent systems. There are several implementations for .NET. <a href="https://github.com/dotnet/orleans">Orleans</a> and <a href="http://getakka.net/">Akka.Net</a> are the best known.</p>
<p>As an exercise, let's build a &quot;thermostat&quot; system.</p>
<p>I'll leave out some implementation details so we can focus on the integration.</p>
<h2 id="thethermostatsystem">The Thermostat System</h2>
<p>The system will display both the current and the desired temperature to the user.</p>
<p>It'll have two buttons to increase and decrease the desired temperature.</p>
<p>In the background, the system should control the heating but we'll ignore that bit here.</p>
<p>It'll look like this:</p>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/04/2016-04-21-00_57_49-Thermostat--WPF-and-Akka-NET-Excercise-.png" alt></p>
<h3 id="ui">UI</h3>
<p>We'll create a basic WPF application with MVVM. The ViewModel will have a property for the current temperature and the target temperature.</p>
<p>It also accepts two commands: <code>IncreaseTargetTemperature</code> and <code>DecreaseTargetTemperature</code>.</p>
<h3 id="backend">Back-end</h3>
<p>The back-end should have a thermometer that takes measurements on a regular basis.</p>
<p>It should also keep track of the target temperature, set by the user.</p>
<p>The target temperature can change by two messages sent to the system:</p>
<ul>
<li>Increase the target temperature</li>
<li>Reduce the target temperature</li>
</ul>
<p>As output, the system should emit:</p>
<ul>
<li>The current temperature, when a measurement takes place</li>
<li>The target temperature, when it's changed</li>
</ul>
<h2 id="thermostatactorsystem">Thermostat Actor System</h2>
<h3 id="actors">Actors</h3>
<p>This simple actor system can consist of two Actors:</p>
<ul>
<li>TemperatureSensorActor</li>
<li>ThermostatActor</li>
</ul>
<h3 id="temperaturesensor">TemperatureSensor</h3>
<p>The TemperatureSensor sends a message <code>TemperatureMeasured</code> to its Parent actor. This happens as a response to a <code>TakeMeasurement</code> message.</p>
<p>The TemperatureSensor sends the <code>TakeMeasurement</code> to itself on a fixed interval.</p>
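<p>One way to implement that fixed interval (a sketch under assumptions: the message names match the post, but the 5-second interval is mine) is Akka.NET's built-in scheduler:</p>
<pre><code>public class TemperatureSensorActor : ReceiveActor
{
    public TemperatureSensorActor()
    {
        // Tell ourselves to take a measurement every 5 seconds.
        Context.System.Scheduler.ScheduleTellRepeatedly(
            TimeSpan.Zero, TimeSpan.FromSeconds(5),
            Self, new TakeMeasurement(), Self);

        Receive&lt;TakeMeasurement&gt;(_ =&gt;
            Context.Parent.Tell(new TemperatureMeasured(ReadTemperature())));
    }

    // Stub: a real implementation would read an actual sensor.
    private double ReadTemperature() =&gt; 21.0;
}
</code></pre>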
<h3 id="thermostat">Thermostat</h3>
<p>The Thermostat is in charge of creating a TemperatureSensor.</p>
<p>It's a <a href="https://github.com/petabridge/akka-bootcamp/blob/0ac15bdc4dbe54f9169e8e6e026bf0ec28e8f2a2/src/Unit-2/lesson3/README.md#how-do-i-do-pubsub-with-akkanet-actors">&quot;pub-sub&quot;</a> actor that should send to its subscribers the following messages:</p>
<ul>
<li><code>TargetTemperatureSet</code></li>
<li><code>TemperatureMeasured</code></li>
</ul>
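<p>The pub-sub part could be sketched as follows (a partial sketch assuming a <code>Subscribe</code> message with a <code>Subscriber</code> property; the sample's actual implementation may differ):</p>
<pre><code>public class ThermostatActor : ReceiveActor
{
    private readonly List&lt;IActorRef&gt; _subscribers = new List&lt;IActorRef&gt;();

    public ThermostatActor()
    {
        Receive&lt;Subscribe&gt;(message =&gt; _subscribers.Add(message.Subscriber));

        // Fan incoming events out to every subscriber.
        Receive&lt;TemperatureMeasured&gt;(message =&gt; Publish(message));
        Receive&lt;TargetTemperatureSet&gt;(message =&gt; Publish(message));
    }

    private void Publish(object message)
    {
        foreach (var subscriber in _subscribers) { subscriber.Tell(message); }
    }
}
</code></pre>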
<h2 id="bridge">Bridge</h2>
<p>With the UI and the actor system ready, we can start to make them talk with each other.</p>
<blockquote>
<p>The code we start with is on <a href="https://github.com/AaronLenoir/AkkaAndWpfExample/tree/master">github</a>. If you want to try this yourself first, you could fork it. If not, just read on to see how I do it.</p>
</blockquote>
<p>To support this, WPF should react to messages sent by the Actors. At the same time, WPF should be able to send messages to the Actor System, in some way.</p>
<p>To support this communication we could create a &quot;Bridge&quot;. The Bridge will contain:</p>
<ul>
<li>A Bridge Actor</li>
<li>One or more public functions</li>
</ul>
<p>The Bridge Actor will have a reference to the ViewModel. The ViewModel will have a reference to the Bridge (not the Bridge Actor!).</p>
<p>The Bridge can expose methods the ViewModel can call which pass messages to the Actor System. For example, a method <code>IncreaseTargetTemperature</code>.</p>
<p>The Bridge Actor calls functions exposed by the ViewModel. These will update the ViewModel's properties. The actor does this only while handling messages.</p>
<h3 id="bridgeinterfaces">Bridge Interfaces</h3>
<p>To avoid exposing the entire ViewModel to the Bridge Actor, an interface seems like a good idea. And an interface to abstract away the Bridge passed to the ViewModel would also help.</p>
<p>In short, we'll have two Interfaces. One implemented by the ViewModel, the other by the Bridge itself.</p>
<p>We can define these two interfaces:</p>
<ul>
<li><code>IThermostatView</code>
<ul>
<li><code>UpdateCurrentTemperature</code></li>
<li><code>UpdateTargetTemperature</code></li>
</ul>
</li>
<li><code>IThermostatBridge</code>
<ul>
<li><code>IncreaseTargetTemperature</code></li>
<li><code>DecreaseTargetTemperature</code></li>
</ul>
</li>
</ul>
<p>The Bridge Actor references <code>IThermostatView</code> while the ViewModel implements it.</p>
<p>The ViewModel references <code>IThermostatBridge</code> while the Bridge implements it.</p>
<h2 id="implementation">Implementation</h2>
<h3 id="views">Views</h3>
<p>The two interfaces, as discussed above, are simple:</p>
<pre><code>// Implemented by the Bridge
public interface IThermostatBridge
{
    void IncreaseTargetTemperature();
    void DecreaseTargetTemperature();
}
</code></pre>
<pre><code>// Implemented by the ViewModel (in the WPF project)
public interface IThermostatView
{
    void UpdateCurrentTemperature(double currentTemperature);
    void UpdateTargetTemperature(double targetTemperature);
}
</code></pre>
<h3 id="bridgeactor">Bridge Actor</h3>
<p>The Bridge Actor is small, just passing messages around. It does have two dependencies.</p>
<pre><code>public class BridgeActor : ReceiveActor
{
    private IThermostatView _thermostatView;
    private IActorRef _thermostatActor;

    public BridgeActor(IThermostatView thermostatView, IActorRef thermostatActor)
    {
        _thermostatView = thermostatView;
        _thermostatActor = thermostatActor;
        Become(Active);
    }

    public void Active()
    {
        Receive&lt;TemperatureMeasured&gt;(message =&gt; _thermostatView.UpdateCurrentTemperature(message.Temperature));
        Receive&lt;TargetTemperatureSet&gt;(message =&gt; _thermostatView.UpdateTargetTemperature(message.TargetTemperature));
        Receive&lt;IncreaseTargetTemperature&gt;(message =&gt; _thermostatActor.Tell(message));
        Receive&lt;DecreaseTargetTemperature&gt;(message =&gt; _thermostatActor.Tell(message));
    }
}
</code></pre>
<h3 id="bridge">Bridge</h3>
<p>The bridge will implement the <code>IThermostatBridge</code> interface:</p>
<pre><code>public class ThermostatBridge : IThermostatBridge
{
    private IActorRef _bridgeActor;

    private readonly IncreaseTargetTemperature increaseMessage = new IncreaseTargetTemperature(1);
    private readonly DecreaseTargetTemperature decreaseMessage = new DecreaseTargetTemperature(1);

    public ThermostatBridge(IActorRef bridgeActor)
    {
        _bridgeActor = bridgeActor;
    }

    public void IncreaseTargetTemperature()
    {
        _bridgeActor.Tell(increaseMessage);
    }

    public void DecreaseTargetTemperature()
    {
        _bridgeActor.Tell(decreaseMessage);
    }
}
</code></pre>
<h3 id="systemcreation">System Creation</h3>
<p>I already had a separate class to create the Actor System. For the exercise, I'll create the system when the WPF app starts, but this might not be the ideal place.</p>
<pre><code>public partial class App : Application
{
    private static WpfAkkaIntegration.ThermostatSystem.ThermostatSystem _system = new WpfAkkaIntegration.ThermostatSystem.ThermostatSystem();

    public static WpfAkkaIntegration.ThermostatSystem.ThermostatSystem ThermostatSystem =&gt; _system;
}
</code></pre>
<p>In that class, there's a function <code>CreateThermostatBridge</code> which can create our Bridge. Here's the full ThermostatSystem class:</p>
<pre><code>public class ThermostatSystem
{
    private ActorSystem _system;
    private IActorRef _thermostatActor;

    public ThermostatSystem()
    {
        _system = ActorSystem.Create(nameof(ThermostatSystem));
        _thermostatActor = CreateThermostatActor();
    }

    private IActorRef CreateThermostatActor()
    {
        var props = Props.Create&lt;Actors.ThermostatActor&gt;();
        return _system.ActorOf(props, &quot;thermostat&quot;);
    }

    public IThermostatBridge CreateThermostatBridge(IThermostatView thermostatView)
    {
        var bridgeActor = CreateBridgeActor(thermostatView);
        _thermostatActor.Tell(new Subscribe(bridgeActor));

        return new ThermostatBridge(bridgeActor);
    }

    private IActorRef CreateBridgeActor(IThermostatView thermostatView)
    {
        var props = Props.Create(() =&gt; new BridgeActor(thermostatView, _thermostatActor))
            .WithDispatcher(&quot;akka.actor.synchronized-dispatcher&quot;);
        return _system.ActorOf(props, &quot;bridge&quot;);
    }
}
</code></pre>
<h4 id="synchronizeddispatcher">Synchronized Dispatcher</h4>
<p>The code above creates the BridgeActor using the <a href="http://getakka.net/docs/working-with-actors/Dispatchers#synchronizeddispatcher">synchronized dispatcher</a>.</p>
<p>This makes the actor run on the UI thread, so calling methods on the ViewModel should be safe.</p>
<p>From the <a href="http://getakka.net/docs/working-with-actors/Dispatchers#synchronizeddispatcher">akka.net docs</a>:</p>
<blockquote>
<p>You may use this dispatcher to create actors that update UIs in a reactive manner. An application that displays real-time updates of stock prices may have a dedicated actor to update the UI controls directly for example.</p>
</blockquote>
<blockquote>
<p><strong>Note:</strong> As a general rule, actors running in this dispatcher shouldn't do much work. Avoid doing any extra work that may be done by actors running in other pools.</p>
</blockquote>
<p><strong>Update April 27, 2016</strong>: The synchronized dispatcher is not required. All updates of the UI go through property changes on the ViewModel. The data-binding mechanism will take care of executing this on the UI thread. Read more in this blog post &quot;<a href="https://dontcodetired.com/blog/post/AkkaNET-Dispatchers-and-User-Interface-Thread-Access.aspx">Akka.NET Dispatchers and User Interface Thread Access</a>&quot; by <a href="https://twitter.com/robertsjason">Jason Roberts</a>.</p>
<h3 id="bridgecreation">Bridge Creation</h3>
<p>Finally we have everything to hook it all up in the ViewModel.</p>
<pre><code>public MainViewModel()
{
    _bridge = App.ThermostatSystem.CreateThermostatBridge(this);

    IncreaseTargetTemperature = new RelayCommand(() =&gt; _bridge.IncreaseTargetTemperature());
    DecreaseTargetTemperature = new RelayCommand(() =&gt; _bridge.DecreaseTargetTemperature());
}
</code></pre>
<p>You can also see the implementation of the two commands the view can receive.</p>
<p>I do this in the constructor of the ViewModel for this exercise. In real life, I suppose dependency injection would be advisable. But then the ViewModel would still need to assign itself to the Bridge afterwards.</p>
<h2 id="result">Result</h2>
<p><img src="https://blog.aaronlenoir.com/content/images/2016/04/2016-04-21-00_57_49-Thermostat--WPF-and-Akka-NET-Excercise-.png" alt></p>
<p>Running the app gives me this WPF app that updates its view based on messages that occur in the Actor System.</p>
<p>Additionally, the view triggers changes to the system by sending it messages. It does this when it receives a command.</p>
<p>When you click a button in WPF, all it does is ask the bridge to increase the target temperature. The Actor System will then process that message. When it's done, the Actor System forwards a new message back to WPF so it can update the view.</p>
<p>This is all asynchronous, because of the actor system.</p>
<p>In the example, the current temperature only goes up. This is because I did not simulate an actual room that heats up; I just let the temperature increase at every measurement, to show that it is changing.</p>
<h3 id="code">Code</h3>
<p>The example project is on github:</p>
<ul>
<li><a href="https://github.com/AaronLenoir/AkkaAndWpfExample/tree/bridged">Finished</a></li>
<li><a href="https://github.com/AaronLenoir/AkkaAndWpfExample/tree/master">Before creating the bridge</a></li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>It's possible to loosely couple a WPF app with an Akka.NET Actor System. I used something I called a Bridge, <a href="https://web.archive.org/web/20160420223239/http://martinfowler.com/bliki/TwoHardThings.html">but there must be a better name</a>.</p>
<p>Something I don't like here is that the Bridge and Bridge Actor should be more generic. It mentions &quot;Thermostat&quot; everywhere.</p>
<p>Do you think this is a reasonable way to put a WPF front-end on an Akka.NET Actor System? Let me know in the comments, or on <a href="https://www.twitter.com/lenoir_aaron">twitter</a> or something.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Detect System Idle in Windows Applications]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I maintain a <a href="https://github.com/AaronLenoir/activity-logger">small tool</a> to track my own activities through the day. For this, I wanted a feature to automatically log when I was &quot;idle&quot;. So I'd know how long &quot;that meeting&quot; actually took.</p>
<p>Here's how I did that.</p>
<h3 id="thecode">The Code</h3>
<pre><code>[DllImport(&quot;user32.dll&</code></pre>]]></description><link>https://blog.aaronlenoir.com/2016/02/16/detect-system-idle-in-windows-applications-2/</link><guid isPermaLink="false">5d912fbe4cd92f0001765aa0</guid><category><![CDATA[english]]></category><category><![CDATA[windows]]></category><category><![CDATA[c#]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Tue, 16 Feb 2016 00:13:04 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>I maintain a <a href="https://github.com/AaronLenoir/activity-logger">small tool</a> to track my own activities through the day. For this, I wanted a feature to automatically log when I was &quot;idle&quot;. So I'd know how long &quot;that meeting&quot; actually took.</p>
<p>Here's how I did that.</p>
<h3 id="thecode">The Code</h3>
<pre><code>[DllImport(&quot;user32.dll&quot;)]
static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

[StructLayout(LayoutKind.Sequential)]
struct LASTINPUTINFO
{
   public static readonly int SizeOf = Marshal.SizeOf(typeof(LASTINPUTINFO));

   [MarshalAs(UnmanagedType.U4)]
   public UInt32 cbSize;
   [MarshalAs(UnmanagedType.U4)]
   public UInt32 dwTime;
}

private TimeSpan RetrieveIdleTime()
{
    LASTINPUTINFO lastInputInfo = new LASTINPUTINFO();
    lastInputInfo.cbSize = (uint)LASTINPUTINFO.SizeOf;
    GetLastInputInfo(ref lastInputInfo);
        
    int elapsedTicks = Environment.TickCount - (int)lastInputInfo.dwTime;

    if (elapsedTicks &gt; 0) { return new TimeSpan(0, 0, 0, 0, elapsedTicks); }
    else { return new TimeSpan(0); }
}
</code></pre>
<p>Be careful with &quot;long&quot; uptimes of more than about 24 days (see below).</p>
<h3 id="details">Details</h3>
<p>To detect if a user has been idle, you'll have to make a call to the native win 32 function called <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/ms646302%28v=vs.85%29.aspx"><code>GetLastInputInfo</code></a>.</p>
<p>It will give you the tick count at the moment of the last user input. Careful: that's not &quot;the number of ticks the user has been idle&quot;. You'll have to compare it against the current tick count to get that value.</p>
<p>First thing to do is import the function:</p>
<pre><code>[DllImport(&quot;user32.dll&quot;)]
static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);
</code></pre>
<p>The <a href="http://pinvoke.net/default.aspx/Structures/LASTINPUTINFO.html"><code>LASTINPUTINFO</code></a> struct will have to be defined as well:</p>
<pre><code>[StructLayout(LayoutKind.Sequential)]
struct LASTINPUTINFO
{
    public static readonly int SizeOf = Marshal.SizeOf(typeof(LASTINPUTINFO));

    [MarshalAs(UnmanagedType.U4)]
    public UInt32 cbSize;
    [MarshalAs(UnmanagedType.U4)]
    public UInt32 dwTime;
}
</code></pre>
<p>And then you'll need some boilerplate code to actually fill up that structure:</p>
<pre><code>LASTINPUTINFO lastInputInfo = new LASTINPUTINFO();
lastInputInfo.cbSize = (uint)LASTINPUTINFO.SizeOf;
GetLastInputInfo(ref lastInputInfo);

int elapsedTicks = Environment.TickCount - (int)lastInputInfo.dwTime;
TimeSpan idleTime = new TimeSpan(0, 0, 0, 0, elapsedTicks);
</code></pre>
<p>Don't forget to manually set the cbSize value.</p>
<p>The definition of <code>LASTINPUTINFO</code> comes straight from &quot;pinvoke.net&quot;, a great website. However, it's always good to know what you're doing.</p>
<p>The <code>[StructLayout(LayoutKind.Sequential)]</code> attribute says that the fields must be <a href="https://msdn.microsoft.com/library/system.runtime.interopservices.structlayoutattribute%28v=vs.100%29.aspx">laid out in memory in the order they appear</a>. This goes for the object both in managed and unmanaged memory, because the Win32 native functions depend on the exact layout of the struct in memory (please <a href="https://www.twitter.com/lenoir_aaron">correct me</a> if I'm wrong here).</p>
<p><code>Marshal.SizeOf</code> returns the size of the unmanaged type in memory. We can then use that value to set the <code>cbSize</code> property.</p>
<p>Also, both cbSize and dwTime are strictly defined as unsigned, 32-bit integers. That's also what the <code>MarshalAs(UnmanagedType.U4)</code> attribute indicates.</p>
<h3 id="uptimeissue">Uptime Issue</h3>
<p>One thing to think about: the &quot;tick count&quot; in Windows is the number of milliseconds since the system started. We're dealing with an unsigned 32-bit integer, which means it can represent about 49.7 days before it wraps around.</p>
<p>Normally you work around this by calling <code>GetTickCount64</code>, but there doesn't seem to be a 64-bit equivalent for <code>GetLastInputInfo</code>. In any case, if you're converting to a <code>TimeSpan</code> (whose constructor here takes milliseconds as a signed 32-bit integer), you'll have trouble after about 24 days.</p>
<p>That is to say, the above will fail if the system is up for a long time.</p>
<p>Does anyone know a way around this?</p>
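<p>One partial workaround (a sketch on my part, untested on long uptimes) is to do the subtraction in unsigned 32-bit arithmetic, which stays correct across the wrap-around as long as the idle time itself is below about 49.7 days:</p>
<pre><code>// Unsigned subtraction is correct modulo 2^32, so the wrap-around
// of the tick counter doesn't matter for the elapsed value itself.
uint now = unchecked((uint)Environment.TickCount);
uint elapsedMs = unchecked(now - lastInputInfo.dwTime);

// TimeSpan.FromMilliseconds takes a double, avoiding the signed
// 32-bit overflow of the TimeSpan(d, h, m, s, ms) constructor.
TimeSpan idleTime = TimeSpan.FromMilliseconds(elapsedMs);
</code></pre>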
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Automatically reset desktop]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>When using Windows, I always end up with a messy desktop. Temporary files, useless shortcuts, small notes, an &quot;OldIcons&quot; folder ...</p>
<p>What I would like is a way for my desktop to reset itself every day. The icons that really have to be there should be there, but all</p>]]></description><link>https://blog.aaronlenoir.com/2015/11/12/automatically-reset-windows-desktop-every-day/</link><guid isPermaLink="false">5d912fbe4cd92f0001765a9f</guid><category><![CDATA[english]]></category><category><![CDATA[script]]></category><category><![CDATA[reminder]]></category><category><![CDATA[windows]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Thu, 12 Nov 2015 00:34:38 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>When using Windows, I always end up with a messy desktop. Temporary files, useless shortcuts, small notes, an &quot;OldIcons&quot; folder ...</p>
<p>What I would like is a way for my desktop to reset itself every day. The icons that really have to be there should be there, but all the other junk can be removed.</p>
<p>It's relatively easy to do in Windows, without third party software.</p>
<h2 id="solution">Solution</h2>
<p>The goal is to use the &quot;Public Desktop&quot; as the template of what the desktop should look like. Everything on the personal desktop is removed at log on.</p>
<p>In Windows, your desktop shows the icons from the Public Desktop <code>C:\Users\Public\Desktop</code>, and from your user <code>C:\Users\user\Desktop</code>.</p>
<p>So all you have to do is clear <code>C:\Users\user\Desktop</code> at startup.</p>
<h3 id="step1thescript">Step 1: The script</h3>
<p>Make sure the directory <code>C:\Users\Default\Desktop</code> exists and is empty.</p>
<p>Create a file clearDesktop.cmd:</p>
<pre><code>@echo off
robocopy &quot;C:\Users\Default\Desktop&quot; &quot;C:\Users\user\Desktop&quot; /Z /E /MIR
</code></pre>
<p>For some reason, clearing all files and folders from a folder is complicated, as seen in this <a href="http://stackoverflow.com/questions/6836566/batch-file-delete-all-files-and-folders-in-a-directory">stackoverflow question</a>.</p>
<p>It's easy using robocopy, but you need an empty folder first. You then mirror that empty folder onto the folder whose content you want to remove.</p>
<p>Additionally, you could add some files to the &quot;Default&quot; desktop folder. That way they are also added to the user desktop.</p>
<h3 id="step2createthescheduledtask">Step 2: Create The Scheduled Task</h3>
<p>You can make the script run at system startup.</p>
<p>In Windows, go to &quot;Start&quot;, type &quot;Task Scheduler&quot; and start it.</p>
<p>Add a new task, with:</p>
<ul>
<li>Trigger: At system startup</li>
<li>Action: &quot;Start a program&quot;</li>
<li>And pass the full path of the clearDesktop.cmd script.</li>
</ul>
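<p>If you prefer the command line, the same task can be created with <code>schtasks</code> from an elevated prompt. This is a sketch I haven't used myself; the task name and script path are examples, so adjust them to wherever you saved the script:</p>
<pre><code>rem Runs clearDesktop.cmd at every system startup, as SYSTEM.
rem &quot;ClearDesktop&quot; and C:\Scripts\clearDesktop.cmd are example values.
schtasks /Create /TN &quot;ClearDesktop&quot; /TR &quot;C:\Scripts\clearDesktop.cmd&quot; /SC ONSTART /RU SYSTEM
</code></pre>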
<h3 id="test">Test</h3>
<p>First run the script manually, and make sure you're pointing at the correct folders! Watch your files disappear from the desktop.</p>
<p>Now put a new file on your desktop and reboot. The file should be gone now.</p>
<h3 id="conclusion">Conclusion</h3>
<p>I always make weird scripts to do the things I want, and I always forget how I did them. So I wrote this one down here.</p>
<p>That's my conclusion.</p>
<p>Another conclusion could be: you don't need to download third-party tools for everything.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Podcast List]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p><sup>Latest update: <em>28 september 2019: I listen to a lot less podcasts and a lot more music now-a-days.</em></sup></p>
<p>I have a long commute and very bad radio shows. So in the car, I like to listen to podcasts.</p>
<p>Here are some of my favorites.</p>
<p>TL;DR</p>
<ul>
<li><strong><a href="https://www.codenewbie.org/podcast">Code Newbie</a></strong> <a href="http://feeds.codenewbie.org/cnpodcast.xml">RSS Feed</a></li></ul>]]></description><link>https://blog.aaronlenoir.com/2015/10/24/podcast-list/</link><guid isPermaLink="false">5d912fbe4cd92f0001765a9e</guid><category><![CDATA[english]]></category><category><![CDATA[podcast]]></category><category><![CDATA[commuting]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Sat, 24 Oct 2015 23:35:33 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p><sup>Latest update: <em>28 september 2019: I listen to a lot less podcasts and a lot more music now-a-days.</em></sup></p>
<p>I have a long commute and very bad radio shows. So in the car, I like to listen to podcasts.</p>
<p>Here are some of my favorites.</p>
<p>TL;DR</p>
<ul>
<li><strong><a href="https://www.codenewbie.org/podcast">Code Newbie</a></strong> <a href="http://feeds.codenewbie.org/cnpodcast.xml">RSS Feed</a></li>
<li><strong><a href="https://www.codenewbie.org/basecs">Base.cs</a></strong> <a href="http://feeds.codenewbie.org/basecs_podcast.xml">RSS Feed</a></li>
<li><strong><a href="https://risky.biz/">Risky Business</a></strong> <a href="https://risky.biz/feeds/risky-business/">RSS Feed</a></li>
<li><strong><a href="https://www.dotnetrocks.com/">.NET Rocks</a></strong> <a href="http://www.pwop.com/feed.aspx?show=dotnetrocks&amp;filetype=master">RSS Feed</a></li>
<li><strong><a href="https://www.schneier.com/crypto-gram/">Crypto Gram Newsletter</a></strong> <a href="http://dhenage.libsyn.com/rss">RSS Feed</a></li>
<li><strong><a href="http://hanselminutes.com/">Hanselminutes</a></strong> <a href="http://feeds.podtrac.com/9dPm65vdpLL1">RSS Feed</a></li>
<li><strong><a href="http://securityweekly.com/">Security Weekly</a></strong> <a href="http://securityweekly.com/podcast/psw.xml">RSS Feed</a></li>
<li><strong><a href="https://www.troyhunt.com/tag/weekly-update/">Troy Hunt's Weekly Update</a></strong> <a href="http://www.omnycontent.com/d/playlist/1439345f-6152-486d-a9c2-a6bf0067f2b7/3ba9af7f-3bfb-48fd-aae7-a6bf00689c10/fde26e49-9fb8-457d-8f16-a6bf00696676/podcast.rss">RSS Feed</a></li>
<li><strong><a href="http://herdingcode.com/">Herding Code</a></strong> <a href="https://feeds.feedburner.com/HerdingCode?format=xml">RSS Feed</a></li>
<li><strong><a href="http://www.se-radio.net/">Software Engineering Radio</a></strong> <a href="https://feeds.feedburner.com/se-radio?format=xml">RSS Feed</a></li>
<li><strong><a href="https://softwareengineeringdaily.com/">Software Engineering Daily</a></strong> <a href="http://softwareengineeringdaily.com/feed/podcast/">RSS Feed</a></li>
<li><strong><a href="https://danielmiessler.com/podcast/">Unsupervised Learning</a></strong> <a href="http://www.omnycontent.com/d/playlist/070af456-729b-4a0f-9c09-a6c100397b59/3b159371-276d-429e-ae86-a6c1003b01c4/7b61d4e1-bd3d-4d3f-97c2-a6c1003b01c9/podcast.rss">RSS Feed</a></li>
<li><strong><a href="https://changelog.com/">The Changelog</a></strong> <a href="http://feeds.5by5.tv/changelog">RSS Feed</a></li>
<li><strong><a href="http://webdevradio.com/">Web Dev Radio</a></strong> <a href="http://webdevradio.com/feed/">RSS Feed</a></li>
<li><strong><a href="http://www.startalkradio.net/">StarTalk Radio</a></strong> <a href="http://feeds.soundcloud.com/users/soundcloud:users:38128127/sounds.rss">RSS Feed</a></li>
<li><strong><a href="http://www.nature.com/">Nature</a></strong> <a href="http://feeds.nature.com/nature/podcast/current?format=xml">RSS Feed</a></li>
<li><strong><a href="http://www.physicscentral.com/">Physics Central Podcast</a></strong> <a href="http://www.physicscentral.com/about/feed/podcasts.cfm?outputXML">RSS Feed</a></li>
<li><strong><a href="http://www.scientificamerican.com/podcast/science-talk/">Scientific American Science Talk</a></strong> <a href="http://rss.sciam.com/sciam/science-talk?format=xml">RSS Feed</a>.</li>
<li><strong><a href="http://rss.sciam.com/sciam/60secsciencepodcast">60 Seconds Science</a></strong> <a href="http://rss.sciam.com/sciam/60secsciencepodcast?format=xml">RSS Feed</a></li>
<li><strong><a href="http://rss.sciam.com/sciam/60-second-space">60 Seconds Space</a></strong> <a href="http://rss.sciam.com/sciam/60-second-space?format=xml">RSS Feed</a></li>
<li><strong><a href="http://rss.sciam.com/sciam/60-second-tech">60 Seconds Tech</a></strong> <a href="http://rss.sciam.com/sciam/60-second-tech?format=xml">RSS Feed</a></li>
<li><strong><a href="http://rss.sciam.com/sciam/60-second-earth">60 Seconds Earth</a></strong> <a href="http://rss.sciam.com/sciam/60-second-earth?format=xml">RSS Feed</a></li>
<li><strong><a href="http://rss.sciam.com/sciam/60-second-health">60 Seconds Health</a></strong> <a href="http://rss.sciam.com/sciam/60-second-health?format=xml">RSS Feed</a></li>
<li><strong><a href="http://rss.sciam.com/sciam/60-second-mind">60 Seconds Mind</a></strong> <a href="http://rss.sciam.com/sciam/60-second-mind?format=xml">RSS Feed</a></li>
<li><strong><a href="http://thisdeveloperslife.com/">This Developer's Life</a></strong> <a href="https://feeds.feedburner.com/ThisDevelopersLife?format=xml">RSS Feed</a></li>
<li><strong><a href="http://www.radiolab.org/">Radiolab</a></strong> <a href="http://feeds.wnyc.org/radiolab?format=xml">RSS Feed</a></li>
<li><strong><a href="https://www.thecipherbrief.com/podcasts/intelligence-matters">Intelligence Matters</a></strong> <a href="https://intelligencematters.libsyn.com/rss">RSS Feed</a></li>
<li><strong><a href="https://pilottopilot.libsyn.com/">Pilot to Pilot</a></strong> <a href="https://pilottopilot.libsyn.com/rss">RSS Feed</a></li>
</ul>
<p>Update: here's <a href="https://twitter.com/sleepeasysw">someone else</a>'s <a href="http://www.sleepeasysoftware.com/11-podcasts-that-will-make-you-a-better-software-engineer/">list</a> too. Via Daniel Kaplan.</p>
<h2 id="programmingandit">Programming and IT</h2>
<p><strong><a href="https://www.codenewbie.org/podcast">Code Newbie</a></strong></p>
<p><a href="http://feeds.codenewbie.org/cnpodcast.xml">RSS Feed</a></p>
<p>It targets people new to the world of software development, in an approachable, inclusive and entertaining manner. But it's useful for everybody, because in this field you're always a beginner at something. And if there's a topic you already know very well, it helps to hear people like <a href="https://twitter.com/saronyitbarek">Saron</a> explain it in such clear terms.</p>
<p><strong><a href="https://www.codenewbie.org/basecs">Base.cs</a></strong></p>
<p><a href="http://feeds.codenewbie.org/basecs_podcast.xml">RSS Feed</a></p>
<p>Base.CS is a podcast mash-up of <a href="https://medium.com/@vaidehijoshi">Vaidehi Joshi's</a> <a href="https://medium.com/basecs">BaseCS blog series</a> and the Code Newbie podcast, with <a href="https://twitter.com/saronyitbarek">Saron Yitbarek</a>.</p>
<p><strong><a href="https://www.dotnetrocks.com/">.NET Rocks</a></strong></p>
<p><a href="http://www.pwop.com/feed.aspx?show=dotnetrocks&amp;filetype=master">RSS Feed</a></p>
<p>Carl Franklin and Richard Campbell interview a diverse set of guests about software development. Don't let the name of the podcast fool you. The topics go <a href="https://www.dotnetrocks.com/?show=1159">way</a> <a href="https://www.dotnetrocks.com/?show=1187">beyond</a> <a href="https://www.dotnetrocks.com/?show=1154">.NET</a>.</p>
<p>Every month or so they have a <em>Geek Out</em>, where they go deep into a topic that's not directly related to software.</p>
<p><strong><a href="https://www.schneier.com/crypto-gram/">Crypto Gram Newsletter</a></strong></p>
<p><a href="http://dhenage.libsyn.com/rss">RSS Feed</a></p>
<p>This is a spoken version of the <a href="https://www.schneier.com/crypto-gram/">Crypto Gram Newsletter</a> by <a href="https://www.schneier.com/">Bruce Schneier</a>.</p>
<p>It covers topics like information security, online privacy, cryptography, and so on. It's mostly Schneier's blog, read aloud.</p>
<p><strong><a href="http://hanselminutes.com/">Hanselminutes</a></strong></p>
<p><a href="http://feeds.podtrac.com/9dPm65vdpLL1">RSS Feed</a></p>
<p>Scott Hanselman interviews various people about a <a href="http://www.hanselminutes.com/494/jetcom-scales-with-azure-f-and-more-with-rachel-reese">wide</a> <a href="http://www.hanselminutes.com/301/learning-to-speak-another-language-with-zach-owens">range</a> of <a href="http://www.hanselminutes.com/465/march-is-for-makers-3d-printing-with-printrbots-brook-drumm">&quot;geeky&quot;</a> topics.</p>
<p><strong><a href="http://securityweekly.com/">Security Weekly</a></strong></p>
<p><a href="http://securityweekly.com/podcast/psw.xml">RSS Feed</a></p>
<p>Weekly interviews and discussions about Information Security. It's made by Information Security professionals, and it also targets that same audience. But there are interesting things talked about for Software Developers too.</p>
<p>Update: Security Weekly now has several spin-offs, each with a different focus (Startup Security Weekly, Enterprise Security Weekly, ...). I personally usually listen to &quot;Hack Naked News&quot; (what's in a name?) because the others are a bit too long, even for my commute! The news usually sticks to about half an hour.</p>
<p><strong><a href="https://risky.biz/">Risky Business</a></strong></p>
<p><a href="https://risky.biz/feeds/risky-business/">RSS Feed</a></p>
<p>&quot;News and current affairs&quot; in information security. Entertaining and knowledgeable hosts covering the events in &quot;infosec&quot;. It could do with a little less cynicism and snark, but it's still very informative and not too long.</p>
<p><strong><a href="https://www.troyhunt.com/tag/weekly-update/">Troy Hunt's Weekly Update</a></strong></p>
<p><a href="http://www.omnycontent.com/d/playlist/1439345f-6152-486d-a9c2-a6bf0067f2b7/3ba9af7f-3bfb-48fd-aae7-a6bf00689c10/fde26e49-9fb8-457d-8f16-a6bf00696676/podcast.rss">RSS Feed</a></p>
<p><a href="https://www.troyhunt.com/">Troy Hunt</a> talks about his week, which is somehow always a very interesting week. Troy Hunt specializes in teaching the world, companies and software developers in particular, about information security. But sometimes he focusses on other interesting topics, depending on what he's been up to.</p>
<p>Also, check out his blog for more of this stuff.</p>
<p><strong><a href="http://herdingcode.com/">Herding Code</a></strong></p>
<p><a href="https://feeds.feedburner.com/HerdingCode?format=xml">RSS Feed</a></p>
<p><a href="https://twitter.com/odetocode">K. Scott Allen</a>, Kevin Dente, <a href="https://twitter.com/lazycoder">Scott Koon</a> and John Galloway talk to guests about all the hip stuff in Software Development. It's all quite informal, entertaining and interesting.</p>
<p><strong><a href="https://danielmiessler.com/podcast/">Unsupervised Learning</a></strong></p>
<p><a href="http://www.omnycontent.com/d/playlist/070af456-729b-4a0f-9c09-a6c100397b59/3b159371-276d-429e-ae86-a6c1003b01c4/7b61d4e1-bd3d-4d3f-97c2-a6c1003b01c9/podcast.rss">RSS Feed</a></p>
<p>Daniel Miessler curates about 3 to 5 hours of reading into a short podcast. Keeps you informed and up to date.</p>
<p><strong><a href="http://www.se-radio.net/">Software Engineering Radio</a></strong></p>
<p><a href="https://feeds.feedburner.com/se-radio?format=xml">RSS Feed</a></p>
<p>This podcast from IEEE Software magazine is pretty hardcore, coming from a computer science perspective. It's interesting if you're up for it. Maybe not for a Monday morning.</p>
<p><strong><a href="https://softwareengineeringdaily.com/">Software Engineering Daily</a></strong></p>
<p><a href="http://softwareengineeringdaily.com/feed/podcast/">RSS Feed</a></p>
<p>Amazingly, this is a daily podcast, covering a wide range of topics. It's almost impossible to listen to all of them, and I have no idea how they pull this off daily. Nonetheless, if you pick the topics you like you'll get some good info.</p>
<p>As a side note: what I really like is that a summary of the topic is the first thing you hear, so if you want to pass on an episode, you can do so right away. Many podcasts have a few minutes of intro before you actually know the subject. That's not always a problem, but while driving, few cars (I've seen none) display episode titles clearly.</p>
<p><strong><a href="https://changelog.com/">The Changelog</a></strong></p>
<p><a href="http://feeds.5by5.tv/changelog">RSS Feed</a></p>
<p>Focusses on giving airtime to the many smaller open source projects out there.</p>
<p>Update June 18, 2016: I feel I should say a few more things about The Changelog. They seem to have changed their angle a bit in the last few months, focussing less on the latest hot thing and even more on the people behind open source projects. Check out the shows that feature <a href="https://changelog.com/201/">Richard Hipp</a> (creator of SQLite), <a href="https://changelog.com/202/">Matz</a> (creator of Ruby) and especially the interview with <a href="https://changelog.com/205/">Pieter Hintjens</a>.</p>
<p>Easily one of my favourites.</p>
<p><strong><a href="http://webdevradio.com/">Web Dev Radio</a></strong></p>
<p><a href="http://webdevradio.com/feed/">RSS Feed</a></p>
<p>A web developer talks about developments in the web development world. Occasionally there is an interview as well.</p>
<h2 id="science">Science</h2>
<p><strong><a href="http://www.startalkradio.net/">StarTalk Radio</a></strong></p>
<p><a href="http://feeds.soundcloud.com/users/soundcloud:users:38128127/sounds.rss">RSS Feed</a></p>
<p>Neil deGrasse Tyson interviews people about science, and they also answer listeners' questions. He is always joined by comedians, for comic relief.</p>
<p>I believe it's now also a TV show, on National Geographic.</p>
<p><strong><a href="http://www.nature.com/">Nature</a></strong></p>
<p><a href="http://feeds.nature.com/nature/podcast/current?format=xml">RSS Feed</a></p>
<p>Science news and interviews from nature.com.</p>
<p><strong><a href="http://www.physicscentral.com/">Physics Central Podcast</a></strong></p>
<p><a href="http://www.physicscentral.com/about/feed/podcasts.cfm?outputXML">RSS Feed</a></p>
<p>Physics news for the general public with an interest in that stuff.</p>
<p><strong><a href="http://www.scientificamerican.com/podcast/science-talk/">Scientific American Science Talk</a></strong></p>
<p><a href="http://rss.sciam.com/sciam/science-talk?format=xml">RSS Feed</a>.</p>
<p>Interviews and news about science, from Scientific American.</p>
<p><strong><a href="http://www.scientificamerican.com/podcasts/">Scientific American 60 seconds</a></strong></p>
<p>60 seconds (actually, usually 2 minutes) of rapid-fire science news. Easy to digest.</p>
<p>They have several 60-second series, each focussing on a different field. I'm listening to Science, Space and Tech. Here are the RSS feeds for all of them:</p>
<ul>
<li><a href="http://rss.sciam.com/sciam/60secsciencepodcast?format=xml">60 Seconds Science</a></li>
<li><a href="http://rss.sciam.com/sciam/60-second-space?format=xml">60 Seconds Space</a></li>
<li><a href="http://rss.sciam.com/sciam/60-second-tech?format=xml">60 Seconds Tech</a></li>
<li><a href="http://rss.sciam.com/sciam/60-second-earth?format=xml">60 Seconds Earth</a></li>
<li><a href="http://rss.sciam.com/sciam/60-second-health?format=xml">60 Seconds Health</a></li>
<li><a href="http://rss.sciam.com/sciam/60-second-mind?format=xml">60 Seconds Mind</a></li>
</ul>
<h2 id="aviation">Aviation</h2>
<p><strong><a href="https://pilottopilot.libsyn.com/">Pilot to Pilot</a></strong></p>
<p><a href="https://pilottopilot.libsyn.com/rss">RSS Feed</a></p>
<p>I've been training to fly lightweight aircraft, and to encourage some thinking about aviation I looked up some aviation podcasts.</p>
<p>I'm currently only listening to this one, but maybe I'll find more in the future.</p>
<h2 id="other">Other</h2>
<p><strong><a href="http://thisdeveloperslife.com/">This Developer's Life</a></strong></p>
<p><a href="https://feeds.feedburner.com/ThisDevelopersLife?format=xml">RSS Feed</a></p>
<p>Scott Hanselman (from the Hanselminutes podcast) explores some topics of life in general more in depth.</p>
<p>Notable episode: <a href="http://thisdeveloperslife.com/post/2-0-2-pressure">2.0.2 Pressure</a>. Scott talks about pressure with the people that keep <a href="http://stackoverflow.com/">Stack Overflow</a> running.</p>
<p><strong><a href="http://www.radiolab.org/">Radiolab</a></strong></p>
<p><a href="http://feeds.wnyc.org/radiolab?format=xml">RSS Feed</a></p>
<p>Well-produced documentaries about science and life in general. This is easily my favorite podcast of them all.</p>
<p>Notable Episode: <a href="http://www.radiolab.org/story/rhino-hunter/">The Rhino Hunter</a>. This episode shows the other, less popular, side of the <a href="https://en.wikipedia.org/wiki/Cecil_%28lion%29">Cecil the Lion</a> story. To me, it shows nicely how there is always more than one truth.</p>
<p><strong><a href="https://www.thecipherbrief.com/podcasts/intelligence-matters">Intelligence Matters</a></strong></p>
<p><a href="https://intelligencematters.libsyn.com/rss">RSS Feed</a></p>
<p>Former acting CIA director Michael Morell interviews people from the U.S. intelligence community. Some interesting stories and knowledgeable views on what the U.S. is up to in terms of intelligence and the military.</p>
<p>I'm not an American citizen, but it's interesting nonetheless. I'd like to have something like this closer to home, but these topics - let's be honest - concern people all over the world.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Switched to Ghost]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>As you can see, the blog changed looks, because I moved from Octopress to Ghost.</p>
<p>Recently, I got in some serious trouble with my Octopress blog, to the point where I couldn't publish any posts and I didn't see an easy fix.</p>
<p>I had switched from Ubuntu to Manjaro. Since</p>]]></description><link>https://blog.aaronlenoir.com/2015/10/22/switched-to-ghost/</link><guid isPermaLink="false">5d912fbe4cd92f0001765a9a</guid><category><![CDATA[english]]></category><category><![CDATA[blog]]></category><category><![CDATA[openshift]]></category><category><![CDATA[octopress]]></category><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Thu, 22 Oct 2015 23:06:10 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>As you can see, the blog changed looks, because I moved from Octopress to Ghost.</p>
<p>Recently, I got in some serious trouble with my Octopress blog, to the point where I couldn't publish any posts and I didn't see an easy fix.</p>
<p>I had switched from Ubuntu to Manjaro. Since then, I tried to preview and publish my octopress blog using Docker, so that I did not have to install specific stuff on my actual machine to generate my blog.</p>
<p>This turned out to give me a terrible time with Octopress:</p>
<ul>
<li>My version of Octopress was old and not compatible with newer Ruby and Rubygem stuff</li>
<li>Dependencies broke</li>
<li>I had to update Octopress, which seemed hard</li>
<li>I was scared to break everything (again) if I pushed to OpenShift</li>
</ul>
<p>This was so terrible that I needed a big change, if I still wanted to have a blog.</p>
<h2 id="lesspostsabouttheblog">Fewer posts about the blog</h2>
<p>I have the feeling that the quality of your blogging platform is inversely proportional to how much you blog about keeping your blog up and running.</p>
<p>So my goal was to not worry as much about the technical underpinnings of my blog and spend that time on trying out other things.</p>
<p>Writing a draft blog post should be quick and painless.</p>
<p>Of course, when I really want to, I still want some control to fiddle around with the technical underpinnings - preferably by my own choice, not because the blog is broken.</p>
<h2 id="options">Options</h2>
<p>So I started building a list for myself of what I thought was important for the blogging platform I'd use:</p>
<ul>
<li>Is it easy to update the software?</li>
<li>Is it easy to write posts?</li>
<li>Is it SaaS or do I host it myself?</li>
<li>Does it cost anything?</li>
<li>What dependencies does it have?</li>
<li>Does it have a nice templating option?</li>
<li>How much control do I have?</li>
</ul>
<p>And so on.</p>
<p>After some googling, I realised there were MANY platforms. Way too many for me to evaluate them all.</p>
<p>I settled on another approach: I'd check some of the more famous blogs I follow and see what they are using.</p>
<p>I ended up with four options:</p>
<ul>
<li><a href="https://jekyllrb.com">Jekyll</a></li>
<li><a href="http://octopress.org">Octopress (based on Jekyll)</a></li>
<li><a href="https://wordpress.com/">WordPress</a></li>
<li><a href="https://ghost.org/">Ghost</a></li>
</ul>
<p>Octopress and Jekyll seemed popular with some types of people. But given my recent experiences, I wanted to move away from that. Note that they are working hard on Octopress 3, and I'm sure it'll improve on version 2 a lot.</p>
<p>WordPress is very popular, and an obvious choice. However, WordPress, and especially its plugins, has quite a bad reputation when it comes to security. Also, taking the most obvious choice is not that exciting. I might regret not just going for WordPress later though, when I break the blog again.</p>
<p>And so there was Ghost. Used by some <a href="http://blog.codinghorror.com">serious</a> <a href="http://www.hanselman.com/blog">bloggers</a>. And easy to spin up on OpenShift, which I was already using (for free) to run my Octopress blog.</p>
<p>Getting it running with a Postgresql back-end took a few minutes and almost literally &quot;one click&quot;.</p>
<h2 id="movingfromoctopress">Moving from Octopress</h2>
<p>I had a shortlist of must-haves before I would seriously consider switching to Ghost.</p>
<p>It had to be possible to:</p>
<ul>
<li>update the Ghost software</li>
<li>import my existing blog-posts</li>
<li>force https</li>
</ul>
<p>All of these turned out to be relatively easy (except the update process, but possible nonetheless).</p>
<h2 id="ghostpositives">Ghost positives</h2>
<p>Ghost has some properties I really like:</p>
<ul>
<li>You can create and publish your posts in the web application itself (in Octopress, this required a git push to OpenShift and a prayer it wouldn't break)</li>
<li>You get a live preview of your Markdown</li>
<li>The default theme is nice and there are many themes available</li>
<li>Easy to export the posts and configuration</li>
<li>In general, less technical fiddling, more writing.</li>
</ul>
<h2 id="ghostnegatives">Ghost negatives</h2>
<p>There are some downsides too:</p>
<p>I now have a dynamic site instead of the static pages of Octopress. You can log in and it has a database back-end. So I must watch out for security issues and update frequently.</p>
<p>I'm still self-hosting, so it's still a bit fiddly, but that is my own choice. I could've gone for the hosted solution too, which would eliminate the need for the tricky update process.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Octopress, which in all fairness did have the slogan &quot;A blogging framework for hackers&quot;, brought me to a tough spot. Hacking is much fun, but focussing on something other than keeping the blog alive also seemed quite promising. I felt the barrier for writing a new post was just higher than it should be, and it was too scary.</p>
<p>I'm looking forward to working with Ghost and hope to have a less bumpy ride than I had with Octopress.</p>
<p>But, let's face it, at some point I'll probably switch again just to try something new.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Fix My Blog: xC3 on US-ASCII]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>After a month or two of absence from the blog, I wrote up something small this week.</p>
<p>Quite unexpectedly, when I pushed my post to OpenShift for publishing, my blog broke.</p>
<p>The styles were stripped, but the blog was still usable, which is nice actually. It only looked like a</p>]]></description><link>https://blog.aaronlenoir.com/2015/07/16/fix-my-blog-xc3-on-us-ascii/</link><guid isPermaLink="false">5d912fbe4cd92f0001765a8f</guid><category><![CDATA[jekyll]]></category><category><![CDATA[ruby]]></category><category><![CDATA[english]]></category><category><![CDATA[blog]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Thu, 16 Jul 2015 21:17:09 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>After a month or two of absence from the blog, I wrote up something small this week.</p>
<p>Quite unexpectedly, when I pushed my post to OpenShift for publishing, my blog broke.</p>
<p>The styles were stripped, but the blog was still usable, which is nice actually. It only looked like a website from early 1995.</p>
<p>Anyway, the error I got from &quot;rake generate&quot; (which &quot;compiles&quot; the blog to static HTML using Jekyll) was:</p>
<blockquote>
<p>Error = jekyll 2.2.0 | Error:  &quot;\xC3&quot; on US-ASCII</p>
</blockquote>
<p>The funny thing was, it worked fine on my local machine, but not on the OpenShift gear. And it had worked fine for the previous blog posts, two months earlier.</p>
<p>First, cut the crap: I fixed it by setting the LC_ALL environment variable to en_US.UTF-8.</p>
<blockquote>
<p>export LC_ALL=en_US.UTF-8</p>
</blockquote>
<p>The reason for this is unknown ...</p>
<!-- More -->
<h2 id="whatwastheproblem">What was the problem?</h2>
<p>A simple google on the error always landed me on some <a href="https://github.com/opscode-cookbooks/chef-client/issues/213" title="Chef">chef</a> issues. At first I thought they were unrelated, but since both use Ruby it was plausible.</p>
<p>The solution they came up with was to set a value for LC_ALL explicitly.</p>
<p>This is an environment variable on Linux, part of the &quot;locale&quot; settings. It seemed to be empty on the gear.</p>
<p>This was strange, because the variable was also empty on my local machine, but I tried it anyway:</p>
<blockquote>
<p>export LC_ALL=en_US.UTF-8</p>
</blockquote>
<p>Just before I run rake generate. And sure enough, that solved the problem.</p>
<p>So I &quot;<a href="https://github.com/AaronLenoir/octopress-blog/commit/78fdda45a4fdc82965cbe9d2ecabff74f662393a" title="integration is also a buzzword">integrated</a>&quot; that into my OpenShift action hook, where I also run rake generate (a story for another day).</p>
<h2 id="whatifirstthoughtwastheproblem">What I first thought was the problem</h2>
<p>I had first typed the new blog post in Notepad on Windows. I thought that copying it into my blog had brought along some &quot;illegal characters&quot; (whatever that means).</p>
<p>But removing the new blog post solved nothing, so I got stranded on that one.</p>
<h2 id="whatistherootcause">What is the root cause?</h2>
<p>I have no idea.</p>
<p>When I look up the error in twitter, I get a small amount of tweets, spanning several years of people not understanding what the error is about.</p>
<p>Perhaps related, when I was apt-get upgrade'ing my other linux machine, it complained a lot about its inability to update LC_ALL.</p>
<p>So I don't know what the hell was going on, but I worked around the problem for now ...</p>
<p>Many questions remain:</p>
<ul>
<li>Why was the problem not present on my local machine?</li>
<li>Where was the \xC3 character?</li>
<li>Why is it solved by just changing LC_ALL?</li>
<li>Why did the same stuff work two months ago?</li>
<li>Is it Ruby?</li>
<li>And why did I bring helium instead of air?</li>
</ul>
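<p>For what it's worth, here's a guess at a mechanism (not a confirmed diagnosis): Ruby derives its default external encoding from the locale, and <code>0xC3</code> is the lead byte of two-byte UTF-8 characters like &quot;é&quot;. With <code>LC_ALL</code> empty, Ruby may fall back to US-ASCII and then choke on UTF-8 bytes in a file. A minimal sketch of that failure:</p>

```ruby
# Simulate reading UTF-8 bytes while Ruby believes the data is US-ASCII,
# as could happen when the locale (LC_ALL) is empty or set to "C".
str = "caf\xC3\xA9".dup.force_encoding("US-ASCII")
puts str.valid_encoding?   # => false (0xC3 is not a legal ASCII byte)

begin
  str.encode("UTF-8")      # roughly what a site generator does with the file
rescue Encoding::InvalidByteSequenceError => e
  puts e.message           # names the offending byte and US-ASCII
end
```

If that's right, it would also explain why it only broke on the gear: my local shell probably had a UTF-8 locale set by the distribution, while the gear had none.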
<h2 id="conclusion">Conclusion</h2>
<p>I <a href="https://twitter.com/lenoir_aaron/status/621445820836126720" title="Tweet where I cry about the error in solitude">hate this error</a> because I don't know what, how or why it happened, but since I worked around it I have no time or energy to investigate it deeper. Not even Google and Twitter are helping me out on this one.</p>
<p>I'm writing it here because it's one of those stupid things you know you'll run into again in a year and you'll tell yourself: &quot;I've seen this before, but I don't remember how I solved it. If only I'd written it down somewhere&quot;.</p>
<p>If anyone knows what's going on, please drop me a line.</p>
<p>... Now let's see what happens when I push this post out.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Private Backup to Cloud Storage]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>As a reminder to myself mostly, here's a run-down of a backup &quot;script&quot; I made to backup a small amount of data (&lt; 1GB) to &quot;the cloud&quot;.</p>
<p>I have a local folder where I store private documents and some other things. I already back this up</p>]]></description><link>https://blog.aaronlenoir.com/2015/07/14/backup-to-cloud-storage/</link><guid isPermaLink="false">5d912fbe4cd92f0001765a9c</guid><category><![CDATA[english]]></category><category><![CDATA[cloud]]></category><category><![CDATA[backup]]></category><category><![CDATA[script]]></category><category><![CDATA[reminder]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Tue, 14 Jul 2015 22:04:50 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>As a reminder to myself mostly, here's a run-down of a backup &quot;script&quot; I made to backup a small amount of data (&lt; 1GB) to &quot;the cloud&quot;.</p>
<p>I have a local folder where I store private documents and some other things. I already back this up locally, but for these things I wanted to put an additional copy &quot;in the cloud&quot;, with something like dropbox.</p>
<p>Not that these files are super secret or that I'm an important target, but still this isn't stuff I just want out in the open online, so before I upload them to whatever cloud storage I want, the script encrypts them locally.</p>
<p><em>Update, December 2015: SpiderOak's client is now called SpiderOakONE. Also, the executable and its location were renamed, but it seems to still work the same. More importantly, <strong>SpiderOak no longer offers 2 GB for free.</strong> They now only offer a 60 day trial or 30 GB for $7 / month.</em></p>
<h2 id="generalidea">General Idea</h2>
<p>To create a batch script that will:</p>
<ul>
<li>Mount a truecrypt volume</li>
<li>Copy files to the mounted truecrypt volume</li>
<li>Unmount the volume</li>
<li>Trigger cloud storage sync of the truecrypt container file</li>
</ul>
<!-- more -->
<h2 id="completedscript">Completed script</h2>
<pre><code>&quot;C:\Program Files\TrueCrypt\TrueCrypt.exe&quot; /volume &quot;C:\Users\user\Documents\SpiderOak Hive\docs.tc&quot; /lz /quit /m ts /p &quot;THE_KEY_FOR_THE_CONTAINER&quot; /b
echo &quot;Wait for beep ...&quot;
pause
robocopy &quot;C:\Users\user\Documents&quot; &quot;Z:&quot; /E /Z
&quot;C:\Program Files\TrueCrypt\TrueCrypt.exe&quot; /q /dz
&quot;C:\Program Files\SpiderOakONE\SpiderOakONE.exe&quot; --batchmode -v
</code></pre>
<h2 id="prerequisites">Prerequisites</h2>
<ul>
<li>Truecrypt</li>
<li>Spideroak</li>
<li>Windows 7 (robocopy)</li>
</ul>
<h3 id="setuptruecrypt">Setup TrueCrypt</h3>
<p>Technically, TrueCrypt no longer exists as a project: its developers abruptly discontinued development in 2014. But you can still use it for general encryption purposes.</p>
<p>You currently can get an installer here: <a href="https://www.grc.com/misc/truecrypt/TrueCrypt%20Setup%207.1a.exe" title="TrueCrypt">TrueCrypt from GRC.COM</a> (yes, grc.com, let's face it, it's probably clean).</p>
<h3 id="setupspideroak">Setup SpiderOak</h3>
<p>I'm using SpiderOak as the no-cost cloud storage for my backup. Not because of the security features it offers, but because you can trigger the sync from the command line (I could not find a way to do this with Dropbox).</p>
<p>Just go to <a href="https://www.spideroak.com/">spideroak.com</a> and setup Spideroak. Turn off &quot;start on windows logon&quot;, unless you want SpiderOak running all the time. I don't because I don't really use it for that purpose.</p>
<h2 id="createtruecryptcontainer">Create TrueCrypt Container</h2>
<p>Create a new empty TrueCrypt container in your SpiderOak &quot;hive&quot; (the folder from which files are synced to SpiderOak).</p>
<p>I created a 1 GB container, which is enough to store the things I want to back up. SpiderOak gives you 2 GB at no cost.</p>
<h2 id="part1mountthecontainerfromcommandline">Part 1: Mount the container from command line</h2>
<pre><code>&quot;C:\Program Files\TrueCrypt\TrueCrypt.exe&quot; /volume &quot;C:\Users\user\Documents\SpiderOak Hive\docs.tc&quot; /lz /quit /m ts /p &quot;THE_KEY_FOR_THE_CONTAINER&quot; /b
echo &quot;Wait for beep ...&quot;
pause
</code></pre>
<p>Break-down of the truecrypt command:</p>
<ul>
<li>/volume: just points to the truecrypt volume we'll be mounting</li>
<li>/lz: mount the volume on the &quot;Z:&quot; drive letter</li>
<li>/quit: perform requested actions and exit</li>
<li>/m ts: mount option to indicate we should NOT preserve timestamp. If we preserve the timestamp SpiderOak will not see the file as changed and it won't sync correctly</li>
<li>/p: the password for the truecrypt volume. I choose to put it in the script. This makes sense: if you can read this script, you likely can also read the files I'm backing up, since they're stored unencrypted on the same system.
<ul>
<li>If you don't want this, just leave the /p parameter out and TrueCrypt will ask you for a password or keyfile.</li>
</ul>
</li>
<li>/b: Beeps when the volume has been successfully mounted</li>
</ul>
<p>The last option is also why I ask to &quot;wait for the beep&quot; after the truecrypt command. If we continue too fast we might start doing things before the volume is mounted.</p>
<h2 id="part2copystuff">Part 2: Copy stuff</h2>
<p>This is easy with robocopy:</p>
<pre><code>robocopy &quot;C:\Users\user\Documents&quot; &quot;Z:&quot; /E /Z
</code></pre>
<p>This copies everything from the given folder to the Z: drive: /E includes subdirectories (even empty ones) and /Z copies files in restartable mode. Adding /MIR would create an exact mirror; in the above case, files deleted in the source will remain on the destination.</p>
<h2 id="part3unmounttruecryptvolume">Part 3: Unmount TrueCrypt volume</h2>
<pre><code>&quot;C:\Program Files\TrueCrypt\TrueCrypt.exe&quot; /q /dz
</code></pre>
<p>This tells TrueCrypt to dismount the Z: drive (/d followed by the drive letter) and exit without showing the UI (/q).</p>
<h2 id="part4triggerspideroaksync">Part 4: Trigger SpiderOak sync</h2>
<pre><code>&quot;C:\Program Files\SpiderOakONE\SpiderOakONE.exe&quot; --batchmode -v
</code></pre>
<p>Use --batchmode to ensure SpiderOak quits after the sync is complete; -v is &quot;verbose&quot;, which gives you some feedback during the process.</p>
<p>Note that for a 1 GB file this will take a while the first time, but subsequent times will be faster because SpiderOak does a differential upload.</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Static File Server With Go (Part 3: Organizing the code)]]></title><description><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>In my previous post I talked about how I changed some of the default behaviour of Go's http.FileServer to fit my limited needs.</p>
<p>Basically, I had built a number of http.Handler implementations that would only hand over some of the incoming requests to http.FileServer, and ignore others.</p>]]></description><link>https://blog.aaronlenoir.com/2015/05/02/static-files-on-openshift-with-go-part-3/</link><guid isPermaLink="false">5d912fbe4cd92f0001765a8e</guid><category><![CDATA[english]]></category><category><![CDATA[go]]></category><category><![CDATA[go-fileserver]]></category><category><![CDATA[openshift]]></category><dc:creator><![CDATA[Aaron Lenoir]]></dc:creator><pubDate>Sat, 02 May 2015 21:12:01 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><!--kg-card-begin: markdown--><p>In my previous post I talked about how I changed some of the default behaviour of Go's http.FileServer to fit my limited needs.</p>
<p>Basically, I had built a number of http.Handler implementations that would only hand over some of the incoming requests to http.FileServer, and ignore others.</p>
<p>Obviously, this approach has some issues. Since I'm just blacklisting some requests there's a chance certain requests can be accepted that I actually did not want to accept. Also, in the whole thing I completely ignore any caching support.</p>
<p>But that is not the point here.</p>
<p>The point is that at first I had created all these handlers in one big Go file. And I was wondering how I could split this functionality up into a &quot;re-usable&quot; Go package. Here's what I tried ...</p>
<!-- more -->
<h2 id="gosworkspace">Go's Workspace</h2>
<p>On the golang website, there's a section called <a href="https://golang.org/doc/code.html" title="How to Write Go Code">How to Write Go Code</a>. The first thing it introduces is the &quot;Workspace&quot;, which dictates a very specific layout for your working directories.</p>
<p>Three folders are suggested:</p>
<ul>
<li>src</li>
<li>pkg</li>
<li>bin</li>
</ul>
<p>All source code in src, package objects in pkg and executables come in bin.</p>
<p>The path of the folder containing your application or package's source code dictates how you can import it in other projects.</p>
<p>One interesting thing in the guide is this:</p>
<blockquote>
<p>&quot;If you keep your code in a source repository somewhere, then you should use the root of that source repository as your base path. For instance, if you have a GitHub account at github.com/user, that should be your base path.&quot;</p>
</blockquote>
<p>So, since I have a github.com account, I created a folder in my workspace:</p>
<ul>
<li>github.com/AaronLenoir</li>
</ul>
<p>I then created a new empty repository &quot;fileserver&quot; on github.com. And cloned it into</p>
<ul>
<li>src/github.com/AaronLenoir/fileserver</li>
</ul>
<p>I now had a Go package with no source code.</p>
<h3 id="withoutgithub">Without github</h3>
<p>You could do this without github or any online source code repository and just create a local folder and reference that. But using github.com does have some advantages.</p>
<h2 id="importingthepackage">Importing the Package</h2>
<p>To import that newly created but empty package into another project, I could use the following import statement:</p>
<pre><code>import (
    &quot;github.com/AaronLenoir/fileserver&quot;
)
</code></pre>
<p>Of course, this will give an error:</p>
<pre><code>No buildable Go source files in ~/go/src/github.com/AaronLenoir/fileserver
</code></pre>
<p>This is because there is no code, yet.</p>
<h2 id="breakingupthecode">Breaking Up The Code</h2>
<p>There are many ways to go about this. But I always like to start writing down how I want to use my package and work from there.</p>
<p>So my goal was something like this:</p>
<pre><code>s := fileserver.Server(&quot;folder&quot;, {forceHttps: true});
</code></pre>
<p>The second argument would then have some flags to indicate what functionality I wanted to enable.</p>
<p>I'm not sure if it's &quot;idiomatic Go&quot; to pass a list of configuration settings to another function. Perhaps a better option is simply to pass actual arguments for each setting. But it worked for me at the time.</p>
<p>Doing it this way does mean that my single fileserver has quite some mixed tasks:</p>
<ul>
<li>Block requests for &quot;dot-file&quot;</li>
<li>Force https</li>
<li>Disallow directory listings</li>
</ul>
<h3 id="creatingtheserver">Creating the Server</h3>
<p>The hook into my package is the &quot;fileserver.Server()&quot; function, which loads up a &quot;server&quot;.</p>
<p>To create this function, I simply had to make a file in my package: fileserver.go.</p>
<p>In there, I simply add one function:</p>
<pre><code>func Server(root string, config Config, filters []Filter) http.Handler {
    // ...
}
</code></pre>
<p>It looks a bit different from my original plan, but the purpose is the same: to spin up a server with the desired characteristics.</p>
<p>The three arguments are:</p>
<ul>
<li>root: simply the path of the directory from which to start serving files</li>
<li>config: a simple struct with several bool fields representing configuration options</li>
<li>filters: an (optional) array of &quot;Filter&quot; implementations, can be nil
<ul>
<li>I use this to allow users to add their own functionality to filter requests, next to the &quot;built in&quot; ones</li>
</ul>
</li>
</ul>
<p>For the config, the struct looks like this:</p>
<pre><code>type Config struct {
    ForceHTTPS     bool
    AllowDirList   bool
    AllowDotPrefix bool
}
</code></pre>
<h3 id="organizingcodeinsidethepackage">Organizing Code Inside the Package</h3>
<p>For the Filter feature, I had to create a &quot;Filter&quot; interface:</p>
<pre><code>type Filter interface {
    Validate(path string) bool
}
</code></pre>
<p>Additionally, I had some implementations of filters, like:</p>
<pre><code>// Used to prevent serving of files that are private
// Currently only basic check: does the name start with a .?
type privateFileFilter struct {
}

// Check if the path refers to a file or directory whose
// name starts with a .
func (f *privateFileFilter) Validate(fullPath string) bool {
    basePath := path.Base(fullPath)
    return !strings.HasPrefix(basePath, &quot;.&quot;)
}
</code></pre>
<p>I could just add all that code to the same fileserver.go, but that's not very convenient. So I created a second file, &quot;Filter.go&quot;, and added it there.</p>
<p>I do not have to import that file anywhere in order for it to be included in the build of the package.</p>
<p>In the same way, I implemented the &quot;dot-file&quot; filter and https forcing in a separate file called &quot;middleware.go&quot;. I don't know if that makes much sense, but it did at the time, so that's good enough for now.</p>
<h2 id="testsingo">Tests in Go</h2>
<p>Go's tooling includes some test infrastructure. If you run the command &quot;go test&quot;, go will try to find tests in your package and run them:</p>
<pre><code>~/go/src/github.com/AaronLenoir/fileserver$ go test
PASS
ok      github.com/AaronLenoir/fileserver   0.059s
</code></pre>
<p>I created some tests in a file &quot;fileserver_test.go&quot;. The &quot;go test&quot; command only looks at files whose names end in &quot;_test.go&quot;, so the filename is important.</p>
<p>Here's one of my tests:</p>
<pre><code>// Tests the fact that a private file will yield a 404 error.
func TestGetPrivateFile(t *testing.T) {
    config := Config{ForceHTTPS: false, AllowDirList: false, AllowDotPrefix: false}
    recorder := httptest.NewRecorder()

    RunGetRequest(config, &quot;http://localhost:4003/test/.text.txt&quot;, recorder)

    if recorder.Code != 404 {
        t.Error(&quot;Expected 404, received &quot;, recorder.Code)
    }
}
</code></pre>
<p>Important to note here is that the function name must start with &quot;Test&quot; and that it must have exactly one argument, &quot;t *testing.T&quot;.</p>
<h2 id="usingthepackage">Using the Package</h2>
<p>In the end, importing the package can be done by &quot;import&quot; of the following package name:</p>
<ul>
<li>&quot;github.com/AaronLenoir/fileserver&quot;</li>
</ul>
<p>If you use this package in a project, before you build you can run the &quot;go get&quot; command.</p>
<p>This will automatically fetch the git repo from github and put it in the expected location (src/github.com/AaronLenoir/fileserver).</p>
<p>This is the advantage of using something like github for your package.</p>
<pre><code>:~/go/src/openshift/files$ go build
main.go:24:2: cannot find package &quot;github.com/AaronLenoir/fileserver&quot; in any of:
    /usr/lib/go/src/pkg/github.com/AaronLenoir/fileserver (from $GOROOT)
    /home/aaron/go/src/github.com/AaronLenoir/fileserver (from $GOPATH)
:~/go/src/openshift/files$ go get
:~/go/src/openshift/files$ go build
</code></pre>
<h2 id="commentsanddocumentation">Comments and Documentation</h2>
<p>Go has a built-in system for generating reference documentation for packages based on the comments in the source files. It's accessible through the &quot;godoc&quot; command.</p>
<p>This command can either print documentation on the command line or spin up a local webserver so that you can browse through the docs (like you would on golang.org for the <a href="https://golang.org/pkg/" title="Golang: Packages">Packages</a>).</p>
<p>For this to work, comments above functions, Interfaces, ... have to be a little bit structured.</p>
<p>For example, my main function to create a server has the following comments:</p>
<pre><code>// Creates an http file server that will serve the files present in the root
// folder and its subfolders.
//
// root: directory from which to serve static files
//
// config: some configuration settings to define the behaviour of the static file
// server
//
// filters: Optionally some fileserver.Filter implementations to add custom
// content filtering (based on the full path of the file)
func Server(root string, config Config, filters []Filter) http.Handler {
    handler := http.FileServer(http.Dir(root))

    if config.ForceHTTPS {
        handler = forceHttps(handler)
    }

    internalFilters := getFilters(config)
    internalFilters = append(internalFilters, filters...)

    handler = protectFiles(handler, internalFilters, root)

    return handler
}
</code></pre>
<p>Running the following command will let me view the docs of my package on: <a href="http://localhost:6060/pkg/github.com/AaronLenoir/fileserver/">http://localhost:6060/pkg/github.com/AaronLenoir/fileserver/</a></p>
<pre><code>godoc -http=:6060
</code></pre>
<p>Note how <a href="http://localhost:6060/">http://localhost:6060/</a> is now basically running the golang website. But if you browse to &quot;Packages&quot; you will also see your local packages documented.</p>
<h3 id="godocorgpackagesearch">Godoc.org Package Search</h3>
<p>One other advantage of putting your packages online on github is that you will automatically get your code documentation available on <a href="https://godoc.org">godoc.org</a>.</p>
<p>As an example, visit <a href="https://godoc.org/github.com/AaronLenoir/fileserver" title="Fileserver docs">https://godoc.org/github.com/AaronLenoir/fileserver</a> to view the latest docs from my little experiment.</p>
<p>This can help you browse through the mass of go projects on github.</p>
<h2 id="conclusion">Conclusion</h2>
<p>I probably know too little about Go to do everything the idiomatic way. But at least I was able to:</p>
<ul>
<li>Get the functionality that I wanted</li>
<li>Package and document it correctly</li>
<li>Get it online and available for re-use</li>
</ul>
<p>And this was all relatively easy, which means it is indeed possible to quickly become productive with Go. But as always, it takes a lot longer to become an expert.</p>
<p>Now all I need is something to actually serve through my fileserver ...</p>
<!--kg-card-end: markdown--><!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>