
Originally published at www.ikriv.com. Please leave any comments there.

A few days ago I ran into a situation where I forgot to add --no-ff when merging a feature branch into master. The result was a mess.

State before the final merge:
Before merge

State after switching to master and doing git merge feature (bad state!):
merge without --no-ff

State after doing git merge --no-ff feature instead (good state):
merge with --no-ff

The flow of events was as follows:

  1. Create ‘feature’ branch off master
  2. Add some commits to feature and to master
  3. Merge latest master into feature to resolve any conflicts
  4. Work on the feature a little more

Now the feature is ready to be merged back into master. Since “feature” has all the commits that “master” has, plus some more, git by default will simply fast-forward “master” to be the same as “feature”, and the feature branch will effectively become the new master. It does not look good: feature and master commits are now mixed in the main branch line. The annoying part is that if it has already been pushed to origin, it is not easy to fix without force-pushes, and force-pushes are not always enabled. The end result is a very confusing commit history for the rest of the life of the project.

If you do use --no-ff, the history is preserved much better: all feature commits are kept in a side branch and only the merge is in the main line.
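The flow of events and the effect of --no-ff can be reproduced in a throwaway repository. This is a minimal sketch; the branch names, commit messages, and scratch directory are made up for illustration:

```shell
set -e
cd "$(mktemp -d)"                                # scratch repository
git init -q
git checkout -q -b master 2>/dev/null || true    # make sure the branch is named master
git config user.email demo@example.com
git config user.name Demo

git commit -q --allow-empty -m "initial"         # starting point on master
git checkout -q -b feature                       # 1. create 'feature' off master
git commit -q --allow-empty -m "feature work"    # 2. commits on feature...
git checkout -q master
git commit -q --allow-empty -m "master work"     #    ...and on master
git checkout -q feature
git merge -q --no-edit master                    # 3. merge latest master into feature
git commit -q --allow-empty -m "more feature"    # 4. work on the feature a little more

git checkout -q master
git merge --no-ff --no-edit feature              # without --no-ff this would fast-forward
git log -1 --pretty=%P | wc -w                   # prints 2: the tip is a true merge commit
```

With a plain git merge feature the last command would print 1, because master would simply fast-forward to the tip of feature.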

Click to download the batch file that demonstrates the problem (rename it from .txt). It creates the “bad” case by default. Pass it the --no-ff argument to create history without a fast-forward.


I have come to the conclusion that things with Russian orthography are in a dismal state, and something has to change: the orthography should be brought closer to the living language. Modern orthography is a minefield where even professionals with a university education regularly make mistakes, while the "average" person, it seems, has no chance at all of staying afloat (the original Russian of this clause deliberately packs 4 mistakes into 6 words) and drowns under the waves of soft signs in reflexive verbs, prefixes versus prepositions, and idiotic alternations like rast-/rost-.

What exactly to do I don't know, I am not a linguist, but this is just sum kind of horrer (misspellings deliberate, as in the original).

How to do console output in WPF


If you wish to access standard input and output in your WPF app, you will have to compile it as a Console app and fire up the WPF application object manually.

Sample application: https://github.com/ikriv/WpfStdOut.

Attaching to the parent process console, as proposed by this CSharp411 article (thanks to Roel van Lisdonk for the pointer), insists on writing to the actual console and does not work well with redirected output.

Steps to apply to your application:

  1. Create a WPF application as you would normally.
  2. At this point you can call Console.Out.WriteLine(), but its output will not go anywhere.
  3. Add a Program.cs file with the following boilerplate code:

    public class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
            var app = new App();
            app.InitializeComponent();
            app.Run();
        }
    }
  4. Right click on the project, and choose Properties→Application.
  5. Change “Startup object” to YourNamespace.Program.
  6. Change “Output type” to “Console Application”.

The downside is that when you run the WPF app from a shortcut, it will show a black console window in the background. This window can be hidden as described in this StackOverflow article.

This is not a major problem though, since the purpose of the console output is either to show it in the command line window or to redirect it to a file or stream. If push comes to shove, one can create a starter that launches the “console” application with a detached console (ProcessStartInfo.CreateNoWindow).


Insane Apache RewriteRules


I have just spent a few fun hours fighting Apache rewrite rules in an .htaccess file. It turns out that temporary redirects just work, but permanent redirects require RewriteBase! Frankly, this is insane.

Suppose I have my web site test.ikriv.com that lives in /var/www/html on the server disk. Let’s say I have the following contents in /var/www/html/.htaccess:

RewriteEngine on
RewriteRule ^(.*)\.temp$ $1.html
RewriteRule ^(.*)\.perm$ $1.html [R=permanent]

If I type https://test.ikriv.com/file.temp in the browser, I see the contents of /var/www/html/file.html, as expected. The server does not report a redirect back to the browser, but quietly serves file.html instead of file.temp.

If, however, I type https://test.ikriv.com/file.perm, I get redirected to https://test.ikriv.com/var/www/html/file.html and then get error 404, telling me no such file is available.

The root cause seems to be that in both cases mod_rewrite converts the incoming URL into var/www/html/file.html. In the first case it just serves this file, which works great, but in the second case it returns the file path to the browser, which interprets it as a URL, and it all goes downhill from there. To fix the problem, I must add RewriteBase to the .htaccess file. To be fair, the Apache docs do say it is required, but they don’t say it is required only for permanent redirects. Most sane people would assume that if temporary redirects work, permanent redirects will work as well, but that is not the case.

RewriteEngine on
RewriteBase /
RewriteRule ^(.*)\.temp$ $1.html
RewriteRule ^(.*)\.perm$ $1.html [R=permanent]

Note that I need to specify the base URL of the directory. That is, if the .htaccess file is in some other directory, I must add RewriteBase /that/dir/url. This is quite inconvenient, because

a) as I move directories around, RewriteBase may become broken, and
b) if a directory can be reached via multiple URLs, I will have to pick one as RewriteBase, with possible security issues and other inconveniences.

Why can’t we just make things work without surprises? That is, obviously, a rhetorical question: one person’s surprise is another person’s completely obvious and logical behavior.


Instead of an epigraph, a joke:
- Mykola, let's go beat up the Jews!
- And what if they beat us up?
- Us?! What for?!


The article discusses how evil Facebook bans women for "innocent" comments in the style of "all men are scum". Particular emphasis is placed on the fact that the women did not actually mean anything bad; they were just really fed up with men. So tell me, why do the fighters for every kind of equality start hysterically shrieking "but what did we do?" every time their own rules are suddenly applied to them, the beloved ones? Maybe it was not equality they wanted, but something else?

The phrase in the title is worthy of Jeeves, although it belongs to a simple auto mechanic. Most of us Dzhamshuts will never achieve such a command of the English language. But nobody can stop us from memorizing a ready-made phrase, n'est-ce pas?

The story itself, though, is far more interesting than the phrase. It seems there are places in America tougher than Brighton Beach.


Cloning an Azure virtual machine (long)


I needed to move my virtual machine from one MSDN subscription to another. There is a “Move” option in Azure portal, but it works only if you can access both subscriptions from the same account. Alas, I was not so lucky, so I had to do it “the old way”.

The general idea is to copy the virtual disk (VHD) of the old virtual machine to a blob managed by the new subscription and create a VM from it. Unfortunately, there is no “create from custom VHD” option in the Azure portal. It can only create virtual machines from predefined images like Ubuntu or Windows Server, so one has to use PowerShell. I followed this guide, but had to make some modifications.

  1. Stop the source virtual machine.
  2. Use AzCopy to copy VHD blob to a container under the new subscription.
  3. Create Subnet, Virtual Network, IP address, Network Security Group, and NIC as described in the guide. You will need an SSH rule instead of the RDP rule, but you can configure it later using the Azure Portal.
  4. Create the OS disk using the command below. DO NOT use the one in the guide.
  5. Continue to create the VM as in the guide.

OS Disk

Azure Virtual Machines have tons of properties, but the most difficult part is the OS Disk. I got a lot of weird error messages before I was done. The magic command is

$vm = Set-AzureRmVMOSDisk -VM $vm -DiskSizeInGB 128 -CreateOption Attach -Linux -Caching ReadWrite -VhdUri $mydiskUri -Name $mydiskname

Strange Errors

It is critical to specify -CreateOption Attach and to give the disk a name. My first attempt actually created a Virtual Machine in a “failed” state, with a message, I quote:

OS Provisioning for VM 'vmname' did not finish in the allotted time. However, the VM guest agent was detected running. This suggests the guest OS has not been properly prepared to be used as a VM image (with CreateOption=FromImage). To resolve this issue, either use the VHD as is with CreateOption=Attach or prepare it properly for use as an image.

I also saw a

Cannot attach an existing OS disk if the VM is created from a platform or user image.


New-AzureRmVM : The entity name 'osDisk.name' is invalid according to its validation rule:

The root cause of the last one was that the OSDisk name was empty, but hey, why not show a cool regular expression instead? Microsoft is usually pretty good with error messages, but these were not really helpful.

Getting to see both subscriptions under same account

Microsoft’s account system is highly confusing. The set of resources you see is determined by your login (naturally), by whether it’s a work or personal account, and by “directory”. A “work” account does not mean much: somehow my “work” account is still authenticated on a Microsoft server, but that’s another story. There are all kinds of limitations on which subscriptions can be seen or administered by whom. I ended up with quite a few permutations:

  • Email = my_personal_email, Work=false, Directory=org1; sees subscription 1
  • Email = my_personal_email, Work=true, Directory=ikriv.com; sees nothing
  • Email = my_personal_email, Work=true, Directory=org2; sees subscription 2
  • Email = my_org1_email, Work=false, Directory=org1; sees subscription 1
  • Email = my_org2_email, Work=false, Directory=org2; sees subscription 2

The only permutation I was unable to get is to have one account that would see both subscription 1 and subscription 2, so I could not use “move” functionality of Azure portal.

Copying your VHD

Azure VM disks are stored as simple blobs in Azure storage, so anyone can copy them as long as they have the blob container access keys. No account information is required.

You cannot copy the VHD while the source VM is running. VHDs tend to be tens of gigabytes in size. To avoid paying for traffic, you want to keep the destination in the same data center as the source, and you want to use the AzCopy utility instead of downloading the data to your machine and uploading it back to Azure. NB: AzCopy is not part of the Azure SDK; at the time of writing it is part of the “Azure Storage Tools” download. I am not adding the link here, because it will probably be obsolete in a year or two. Things move fast in Azureland.

Creating the destination VM

Since creating my actual VM took several tries, I am just pasting the commands together. I haven’t tested this script, so use it as a starting point. Be careful to double check all names before running. It is almost impossible to rename anything in Azure once created. E.g. I now have a security group literally named “myNsg” 🙂

$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'eastus' 
$vhdUri = 'https://...'
$subnetName = 'mySubNet'
$vnetName = "myVnetName"
$nsgName = "myNsg"
$ipName = "myIP"
$nicName = "myNicName"
$vmSize = "Standard_D1_v2"

$singleSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix '10.0.0.0/24'  # address range is an example; adjust to your network

$vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location `
    -AddressPrefix '10.0.0.0/16' -Subnet $singleSubnet  # address range is an example

$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name myRdpRule -Description "Allow RDP" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Location $location `
    -Name $nsgName -SecurityRules $rdpRule

$pip = New-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName $rgName -Location $location -AllocationMethod Static

$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName `
    -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -DiskSizeInGB 128 -CreateOption Attach -Linux -Caching ReadWrite -VhdUri $vhdUri -Name 'myOsDisk'
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm


The world of Azure continues to be confusing and fast-moving. You routinely find obsolete screenshots of web pages that no longer exist or look quite different now, and references to commands that no longer work. In a way, Microsoft has adopted the Linux-y development model of “do something now, make it look good later”, which does not always yield the best results. At the end of the day I was able to accomplish the task, but it took me a few hours to get there. Perhaps in the future they will add an option to simply create a VM from a VHD and be done with it.

I also wonder what the right way is to create N identical virtual machines that perform some kind of network-distributed task, but that’s a matter for another conversation.


Template Factory Builder


Hurrah, our project actually has a class named (Something)TemplateFactoryBuilder!

What does it do? Well, it’s very easy. It takes a serialized string and creates a factory, which will produce a template, which, when given a Something, will create a graphical representation of said Something. It may seem that Steve Yegge was exaggerating in his “Execution in the Kingdom of Nouns“, but no, he was not.

I guess the next step would be SomethingTemplateFactoryBuilderProvider followed, perhaps, by SomethingTemplateFactoryBuilderProviderGenerator, and then by SomethingTemplateFactoryBuilderProviderGeneratorCreator, at which point the project would probably run out of steam and be quietly forgotten. After all, perfection is achieved at the point of collapse.


US tax code: engineered to amaze

The American tax code never ceases to amaze me.

I just found out that HSA contributions made directly from payroll are excluded from Social Security and Medicare taxes, but contributions made out of pocket are not. That's a difference of 7.65%. I.e., if you decide to contribute to HSA from after-tax money, you lose $76.50 on every thousand contributed. Both kinds of contributions are excluded from the Federal income tax (up to the statutory limit).

I guess the reason for that is that deduction of federal taxes is controlled by the employee (albeit through a very convoluted system of allowances), and any excess taxes paid can then be reclaimed in the yearly tax return. Deduction of social security and medicare taxes is automatic, and there is no such thing as a "medicare tax return": what is taken will not be coming back. Technically, the federal tax return has a line for "excess social security paid" (e.g. if you had two jobs and earned more than the social security limit combined), so return of at least the Social Security portion of the taxes (6.2%) could be arranged, but the legislators did not bother.

Bottom line: if you have HSA, make sure you deduct contributions from payroll, it's 7-ish percent cheaper that way.

PS. If you make more than the social security contribution limit ($127,200 in 2017), it matters less, because you are going to pay the maximum social security anyway, and you lose only the relatively small medicare part of 1.45%. But it is still amazing.

DotNetCore App on Ubuntu: you need libunwind8


DotNetCore provides two types of installs: framework-dependent install and self-contained install.

I tried to compile a hello-world app on Windows and run it on an Ubuntu machine that has neither the dotnetcore SDK nor the dotnetcore runtime. With the original Visual Studio 2017 and dotnetcore 1.1 it did not work, even on Windows: the published folder for the “complete” app did not even contain an executable file. I had better luck once I upgraded to Visual Studio 2017 version 15.3 and dotnetcore 2.0. The package is 65M uncompressed, but heck, it contains the whole framework, so that was kind of expected.

Two small notes:

– I had to mark all files in the app directory as runnable (chmod 755 *). Making only the app file runnable did not work. I was lazy and just updated them all.

– Initially I received this weird error:

Failed to load �a�, error: libunwind.so.8: cannot open shared object file: No such file or directory
Failed to bind to CoreCLR at '/home/ikriv/bin/dnc/libcoreclr.so'

Three squiggle characters after “Failed to load” were indeed garbage. I searched the net and found that per https://github.com/dotnet/cli/issues/3390 one needs to install libunwind8 package, or else .NET apps won’t work.

sudo apt-get install libunwind8

did the trick. I wish, however, that the runtime were more explicit about the dependencies it needs.

Overall, the experience with dotnetcore 2.0 was not so bad, I’d say 4 stars out of 5.
Dotnetcore 1.1 gets zero stars, since “complete” setup was hopelessly broken, even on Windows.



Technical decisions in software engineering are often driven by politics. The architecture committee decides that “all new code should use threading library X” or “we should be moving to e-mail system Y”, and everyone is supposed to follow. This may be annoying, but true pain begins when the architecture committee changes its mind. This, however, is not really new or specific to software development.

I was thinking about how Uzbek and many other languages of Soviet minorities changed their alphabet 4 (four) times over as little as 70 years. Changing a language’s alphabet is a huge engineering project, not unlike changing the data format of a big database. You need to change how books and newspapers are printed, update road signs and education materials, design and manufacture new typewriters and/or computer keyboards, teach millions of people how to use the new alphabet, and in the process you break backward compatibility with the existing body of printed texts.

Uzbek Alphabet 1.0: Arabic based

The Uzbek language was originally written in Arabic script, which was arguably a poor match for the language structure, and the literacy rate was low. When the Soviets came, their goal was to increase the literacy rate: partly because they shared some ideas of the Enlightenment, and partly because they wanted to influence the population via printed propaganda.

Uzbek Alphabet 2.0: Latin based

In 1924 a new Uzbek alphabet was designed based on the Latin script, and it was gradually released into production from 1928 to 1940. The early Soviets were not Russian nationalists; in fact, they were internationalists expecting the World Revolution to begin soon. Therefore, the Latin script was considered more suitable for new development, even though it was not compatible with Cyrillic, the script used by the majority of the population of the USSR.

Uzbek Alphabet 3.0: Cyrillic based

However, during this time the political tide had changed. The World Revolution did not happen. The USSR became a highly centralized state, with national republics having only token rights. Russian became virtually the only language of politics and business, and the Latin alphabet fell out of favor. In 1940 Uzbek was quickly transitioned to a new alphabet based on Cyrillic, and using the Latin script became gravely frowned upon. Uzbek is not unique in this transition: dozens of other languages were switched from Latin to Cyrillic around that time.

Uzbek Alphabet 4.0: Latin again

The Cyrillic alphabet was used exclusively until 1992, when Uzbekistan gained independence. The new Uzbek state did not want to align with the Soviet/Russian legacy and wanted closer ties with Turkey. It implemented another version of the alphabet based on the Latin script, but different from the one from 1924: it was more similar to Turkish. Deploying version 4.0 to production, however, met with some difficulties. Independent Uzbekistan did not have the resources or the political will of the Soviet Union, so the process was a long one and still continues, with versions 3.0 and 4.0 coexisting and the new Latin script gradually replacing the Cyrillic.

Parallels with software development

Similar stories occur daily in the software development world, creating a huge mess. Imagine the state of the body of Uzbek literature: it currently consists of at least four mutually incompatible corpora, which probably resembles the state of many existing databases. Political powers rarely consider their own mortality when making important technical decisions. Unfortunately, it is usually impossible to avoid the devastating effects of such major changes, but our job as engineers is to mitigate them as much as possible.

If the alphabet has changed once, it will probably change again. So, each data entry in Uzbek should include the version of the alphabet somewhere in its metadata. Automatic procedures for conversion between alphabets would also be helpful. Or we should just use Latin to store the data: it is a dead language and therefore unlikely to change 🙂


At work I sometimes listen to music, and I stumbled upon a pianist named Valentina Lisitsa. She plays well, I must say.

It turned out that this Lisitsa, though born in Kyiv, is an active anti-Maidan, pro-Putin campaigner and a fighter against the "fascist junta"; she gave concerts in Donetsk and all that, for which the Toronto symphony cancelled her concert.

I, of course, fundamentally disagree with her jingoist political position, but I consider this kind of persecution one of the deep diseases of modern Western society. A musician's job is to play well; what they write on Twitter is a secondary question. And she plays stunningly, no matter what nonsense is in her head.

So it turns out that political activity is permitted as long as its direction coincides with the political leanings of the management of concert halls, stadiums, and so on, but as soon as it doesn't: "how could we possibly invite Vasya, he is a fascist, what if people think the same of us?" U2, meanwhile, deliver openly political, rosy-sentimental leftist drivel right at their concerts, and that's fine, nobody bans them; quite the opposite.

How is this situation fundamentally different from Soviet state-approved literature?

TaskTimer: how to execute task on a timer


Over the weekend I created the TaskTimer class, which allows executing code on a timer in an async method:

static async Task DoStuffOnTimer()
{
    using (var timer = new TaskTimer(1000).Start())
    {
        foreach (var task in timer)
        {
            await task;
        }
    }
}

Source code: https://github.com/ikriv/tasktimer

Nuget package: https://www.nuget.org/packages/TaskTimer/1.0.0


To my astonishment, I have found that .NET Task Library does not (seem to) have support for executing a task on timer, similar to Observable.Interval(). I needed to execute an activity every second in my async method, and the suggestions I found on StackOverflow were quite awful: from using Task.Delay() to using a dedicated thread and Thread.Sleep().

Task.Delay() is good enough in many cases, but it has a problem. Consider this code:

static async Task UseTaskDelay()
{
    while (true)
    {
        DoSomething(); // takes 300ms on average
        await Task.Delay(1000);
    }
}

One execution of the loop takes 1300ms on average, instead of the 1000ms we wanted. So, on every execution we accumulate an additional delay of 300ms. If DoSomething() takes a different time from iteration to iteration, our activity will not only be late, it will also be irregular. One could, of course, use a real-time clock to adjust the delay after every execution, but this is quite laborious and hard to encapsulate.

Using TaskTimer

Browse for the “TaskTimer” package on NuGet and add it to your project. Alternatively, you can get the code from GitHub. The code requires .NET 4.5 or above.

You will need to choose the timer period, and the initial delay, or absolute start time:

// period=1000ms, starts immediately
var timer = new TaskTimer(1000).Start();

// period=1 second, starts immediately
var timer = new TaskTimer(TimeSpan.FromSeconds(1)).Start();

// period=1000ms, starts 300ms from now
var timer = new TaskTimer(1000).Start(300);

// period=1000ms, starts 5 minutes from now
var timer = new TaskTimer(1000).Start(TimeSpan.FromMinutes(5));

// period=1000ms, starts at next UTC midnight
var timer = new TaskTimer(1000).StartAt(DateTime.UtcNow.Date.AddDays(1));

The returned object is a disposable enumerable. You must call Dispose() on it to properly dispose of the underlying timer object. The enumerable iterates over an infinite series of tasks. Task number N becomes completed after the timer ticks N times. The enumerable can be enumerated only once.

Avoiding infinite loop

The enumerable returned by the timer is infinite, so you cannot get all the tasks at once.

// won't work; this would require an array of infinite length
var tasks = timer.ToArray();

You can use foreach on the timer, but that would be an infinite loop:

using (var timer = new TaskTimer(1000).Start())
{
    // infinite loop
    foreach (var task in timer)
    {
        await task;
    }
}

If you know the number of iterations in advance, use the Take() method:

// loop executes 10 times
foreach (var task in timer.Take(10))
    await task;

If you don’t, you can use a cancellation token:

async Task Tick(CancellationToken token)
{
    using (var timer = new TaskTimer(1000).CancelWith(token).Start())
    {
        try
        {
            foreach (var task in timer)
            {
                await task;
            }
        }
        catch (TaskCanceledException)
        {
            Console.WriteLine("Timer Canceled");
        }
    }
}

About Disposable Enumerable

I initially implemented the timer sequence as a generator method using the yield keyword. You can see this in the GitHub history:

public IEnumerable<Task> StartOnTimer(...)
{
    using (var taskTimer = new TaskTimerImpl(...))
    {
        while (true)
        {
            yield return taskTimer.NextTask();
        }
    }
}

The trouble with this approach was that calling the Start() method did not actually start the timer: the timer would be started only when one begins to iterate over the sequence. Furthermore, each new enumeration would create a new timer. I found this too confusing. I had enough trouble understanding similar behavior in observables to inflict it on the users of my own class.

So, I decided to switch to a scheme where the timer gets started immediately upon calling the Start() method. In reactive terms, TaskTimer is a hot observable. To stop the timer, one needs to dispose of the return value of the Start() method, hence the disposable enumerable. To use the timer properly, one should put a foreach loop inside a using block, as in the examples above.

Also, it is only possible to iterate over the timer once. The following will not work:

using (var timer = new TaskTimer(1000).Start())
{
    // loop executes 10 times
    foreach (var task in timer.Take(10))
    {
        await task;
    }

    // execute 20 more times -- WON'T WORK! Throws InvalidOperationException!
    foreach (var task in timer.Take(20)) ...
}

It should be possible to implement a timer with multiple iteration semantics, but I decided it was not worth the complexity.


It’s a shame the Task Library does not provide a timer object out of the box. I hope I was able to fill the gap, at least to some extent. In the process I ran into interesting problems with the way IEnumerable, IEnumerator, and IDisposable intersect; it is a fascinating world. I hope you enjoy the result!



Our WPF application started to hang when opening a certain window, and debugging revealed an InvalidOperationException with this mysterious message. After some digging I found that we had code along the lines of

<Style TargetType="{x:Type xctk:DoubleUpDown}" x:Key="bla" ... />
<xctk:DoubleUpDown Style="{StaticResource bla}" ... />

Somehow it happened that the control was coming from one assembly: Xceed.Wpf.Toolkit.DoubleUpDown, Xceed.Wpf.Toolkit, Version=, while the style was targeting a type in another assembly: Xceed.Wpf.Toolkit.DoubleUpDown, WPFToolkit.Extended, Version=

These two assemblies are in fact two versions of Xceed WPF toolkit: they contain the same classes assigned to the same XML namespace. Our mistake was to reference both the old and the new assembly, which led to confusion.

When we write xctk:DoubleUpDown, it may map to either of those assemblies. I am not sure how exactly WPF makes the choice, but apparently the choice may be inconsistent and thus lead to cryptic error messages.

There are several takeaways/anti-patterns that played a dirty trick on us in this case:

  • Xceed should have changed XML namespace, and also the CLR namespace when they changed the assembly name. Creating two different assemblies with the same classes is asking for trouble.
  • WPF should have included full class information somewhere in the error message. Don’t assume that class name or even fully qualified class name uniquely identifies the class.
  • Painting errors should not be fatal. The above XAML error triggered an exception during ‘measure’ phase of each re-paint cycle, effectively hanging the application. There should have been a better way to recover from it, e.g. show an empty window or something.


I stumbled on YouTube upon a song with the words "On the Ulan-Bator - Almaty highway my fuel tank ran dry in the middle of nowhere".

It suddenly turned out that, according to Google Maps, it is almost 3500 km and 50+ hours of driving between Ulaanbaatar and Almaty. I thought it was much less. Besides, as they write on the Internet, "crossing the border, even if there is no queue, takes 2 to 4 hours, but usually, because of the queues, crossing in your own car, for example at Kyakhta, takes a whole day".

Meanwhile, Ulaanbaatar's international airport proudly bears the name Chinggis Khaan International Airport, in Mongolian "Чингис хаан олон улсын нисэх буудал". I do not know what exactly he "buudal"-ed there, but that, in my opinion, is mighty. I propose to develop the idea.

London: Henry VIII International Airport
Paris: Charlemagne International Airport
Berlin: Bismarck International Airport
Rome: Julius Caesar International Airport
Skopje: Alexander the Great International Airport
Moscow: Ivan the Terrible International Airport
and, finally,
Tehran: Artaxerxes Intercontinental Airport.

Saddam, if I remember correctly, had a Hammurabi division, but that was a division, which is a different matter.

PS. You are going to laugh, though....

Does that source match the binary? Part 2


I wrote a tool called “IsItMySource” that can list source files of a binary and, more importantly, check whether particular source directory matches the binary.

The description is here: https://www.ikriv.com/dev/dotnet/IsItMySource/index.php.

The code is on GitHub: https://github.com/ikriv/IsItMySource.


By the second verse the bees started to suspect something... By the solo (1:56) all doubts were gone. An investigation showed that the Eagles were indeed first :) Although Arik Einstein, of course, is no slouch either.

The confusing world of Git merge options


The default git merge is usually all you need, but if you have to dig deeper, you’ll find that the git merge command has very weird-looking and confusing options.

I created the https://github.com/ikriv/merges repository, which shows the results of applying different variants of the git merge command.

For starters, git merge -s ours xyz is not the same as git merge -X ours xyz. The first uses a merge strategy called “ours”; the second uses the default “recursive” merge strategy with the “ours” option. Creating two entities named “ours” that do similar but subtly different things is the epitome of bad interface design.

The “-s” variant creates a commit that merges current branch and “xyz”, but ignores all changes from xyz, so the resulting content would be identical to the current branch. This is useful for getting rid of unwanted branches when force-pushes and deletes are disabled. The “-X” variant only ignores changes from xyz that cause a conflict. Changes from xyz that do not cause a conflict will be included in the end result.
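The behavior of the “-X” variant is easy to see in a scratch repository. Below is a minimal sketch (all file and branch names are invented): one file conflicts between the branches, another is changed only on xyz:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master 2>/dev/null || true
git config user.email demo@example.com
git config user.name Demo

printf 'a\n' > conflict.txt                      # will be changed on both branches
printf 'x\n' > clean.txt                         # will be changed only on xyz
git add . && git commit -qm base
git checkout -q -b xyz
printf 'b\n' > conflict.txt
printf 'y\n' > clean.txt
git commit -qam "xyz changes"
git checkout -q master
printf 'c\n' > conflict.txt
git commit -qam "master changes"

git merge -q -X ours --no-edit xyz               # recursive strategy with the "ours" option
cat conflict.txt                                 # prints c: the conflicting change from xyz was ignored
cat clean.txt                                    # prints y: the non-conflicting change from xyz came through
```

With git merge -s ours xyz instead, clean.txt would have stayed at "x" as well, because the “ours” strategy ignores everything from xyz.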

Similarly, git has git merge -X theirs xyz, which will merge the current branch and xyz, but will ignore changes from the current branch that cause a conflict and will take those from xyz instead.

Git annoyingly lacks git merge -s theirs xyz, which would merge current branch with xyz and make the result identical to xyz. This would be very useful to simulate force pushes, but alas. I naively tried to use git merge -X theirs xyz instead and got into hot water.

The “theirs” merge strategy that ignores the current branch can be simulated by creating a temporary branch and using the “ours” strategy on it, but why do we need to jump through these hoops?
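For completeness, here is a sketch of that temporary-branch trick in a scratch repository (all names are invented), making master's content identical to xyz without a force-push:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git checkout -q -b master 2>/dev/null || true
git config user.email demo@example.com
git config user.name Demo

printf 'base\n' > f.txt; git add .; git commit -qm base
git checkout -q -b xyz
printf 'theirs\n' > f.txt; git commit -qam "xyz version"
git checkout -q master
printf 'ours\n' > f.txt; git commit -qam "master version"

# simulate the missing "git merge -s theirs xyz" on master:
git checkout -q xyz
git checkout -q -b tmp
git merge -q -s ours --no-edit master            # merge commit whose content is identical to xyz
git checkout -q master
git merge -q tmp                                 # fast-forwards master to that merge commit
git branch -q -d tmp
cat f.txt                                        # prints theirs: master now matches xyz
```

The final merge is a fast-forward because master's old tip is one of the parents of the merge commit created on tmp.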




One large company that shall remain nameless runs a 4-year-old version of Git, TortoiseGit, and a lot of other tools, and can’t upgrade.

Being a large company, it can’t just let every developer install the newest version of whatever they please. That would be just wrong, wouldn’t it? They have an infrastructure group that is supposed to “onboard” external products and make them “kosher”.

However, just rubber-stamping someone else’s work is boring. The infrastructure group are smart creative people, so they decided… to modify Git a little.

Since the large company acts like a software black hole (code can come in, but it cannot come out), the customizations did not make it back to public Git. The world moved on, creating new versions of tools that are not compatible with those private customizations.

Apparently, the infrastructure group cannot, or does not want to, keep up with that flow of changes, so they are stuck in 2013, and the whole company is stuck with them. Huzzah!


