

Originally published at www.ikriv.com. Please leave any comments there.

After a trip to Europe I ended up with a mix of US and Euro coins in my wallet. Sorting them out proved surprisingly easy: if you can quickly figure out the denomination by looking at the coin, it’s a Euro coin, if not – it’s a US coin.

Euro coins
US Coins

This is not surprising: Euro coins were introduced simultaneously in multiple countries, and nobody wanted people to get confused, so the design was made as clear as possible, whereas US coins have traditional designs. The denomination is sort of written on the coin, but it is not prominent, and in the case of the 10¢ coin it proudly proclaims “one dime”. I bet very few people outside of the US know what a “dime” is. So, here we go: a very easy sorting method found.



This is a continuation of the C++ series, previous post here. When I was working on the move semantics example, I ran into two familiar, yet forgotten “features” of C++ that caused me major pain:

  • Template code is not verified for correctness unless instantiated.
  • Any single-parameter constructor creates an implicit conversion.

These annoyances are not created equal: the former is much, much more painful than the latter. Modern C++ is all about templates, but you can write any old nonsense in a template: as long as it can pass the lexer, the template will compile. The real correctness check happens when (and if) the template is instantiated, and in C++ it is often hard to tell when that happens, or whether it happens at all, especially for a nameless method like a constructor or an assignment operator. In practice this means it is entirely possible to write a method with a typo, have the program compile and even run, and then discover the error days, weeks, or even months later.

For instance, if I go to matrix.h in my example and replace the assignment operator of the matrix class with the following “code”

matrix& operator=(matrix const& other)
  <div style="color:red;font-size:15px">

the program still happily compiles, because the assignment operator for matrices is never instantiated. Some rudimentary checks are still done: e.g. &nsbp; compiles, but @override does not, since ‘@’ is an illegal character. I am not sure what the rules are exactly (the C++ standard costs top dollar), but in any case the problem is not limited to typos: you can write code that could not be valid C++ under any circumstances, and it will compile.

Implicit conversions are less frequent, but sometimes more baffling. I have a class logger with a constructor along these lines:

class logger
{
public:
    logger(std::ostream& stream) : m_indent(0), m_stream(stream) {}
    // ...
};

I then have two operators:

std::ostream& operator<<(std::ostream& stream, logger const& logger);
template<typename T> std::ostream& operator<<(logger const& logger, T&& value);

If I write something like

logger x(cout); 
cout << x;

I get a compilation error: the two operators above are in conflict. The first one is applicable, but involves an implicit conversion of the right operand from logger to logger const&; the second one is also applicable, but involves an implicit conversion of the left operand from ostream to logger. C++ cannot choose between the two, and thus complains.

When I defined the constructor for logger, the last thing on my mind was implicit conversions. When I got the ambiguity error, it took me a few moments to realize what was going on; then I said ‘aha!’ and made the constructor explicit. Problem solved.

BUT: turning every single-parameter constructor into an implicit conversion by default was a poor language design choice. I am sure that if C++ were reworked from scratch, conversions would be explicit by default, with an implicit keyword to override that. But C++ is not being reworked from scratch, and thus we get these potential land mines every time we absent-mindedly define a single-parameter constructor and forget to mark it explicit. Not cool.

To conclude: I am not talking here about some abstract impurities that may make one’s life difficult in theory. These two actually did make my life difficult during the few hours it took me to create the example.



This is a continuation of my previous post about rvalue references. As you may know, the official C++ specification is not free: it costs upwards of $100, which, I dare say, is a non-trivial amount of money for any developer, even in first-world countries.

This looks like a remnant of the dinosaur age in a modern world where open source is king and the specification of virtually any other popular language (Java, Python, JavaScript, C#) is free to download.

The official ISO web site offers the C++ standard PDF for 198 CHF (about $200), ANSI wants $232, but then only $116 if you go through a different link (amazing, eh?).

Still, even at $116 this is out of reach or unreasonable for most people. Yes, draft versions of the standard are available for free, but you never know how much they differ from the official text, and the final drafts are password-protected.

I saw some people go as far as to claim that one does not need the standard unless one is a language lawyer. I find this point of view condescending and, frankly, wrong. You cannot learn C++ by looking at the standard, but in quirky cases (and trust me, C++ has plenty) it is always good to consult the official law. So, what can a developer do? Cough up $116 (unlikely), find a friend who has a copy (also unlikely, but possible), obtain it illegally from some shady web site, or resort to a draft, hoping it is not too different from reality. None of these choices is attractive.

I remember that a PDF copy of the C++ ’98 standard was available for something like $20: that was a smart move and a reasonable compromise. In fact, I actually bought a copy. While still quite expensive for many in developing countries, it was cheaper than most technical books you find on Amazon or in bookstores. I bet they made more money on that $20 standard than on the $200 one they have now.

Oh well… Going back to my free languages now. 🙂



This weekend I looked at C++ rvalue references and move semantics. Here’s the code; the good stuff is in matrix.h.

Rvalue references seem useful, but way too treacherous. Any programming construct that raises doubts about the nature of objective reality and evokes associations with quantum physics (‘reference collapse’ ~ ‘wave function collapse’) is probably too complicated for its own good. But… maybe you don’t believe me? The great Scott Meyers explains it beautifully in his Channel 9 presentation on rvalue references. I was unable to get it right on my own before I watched it.

Rvalue references can be used to reduce unnecessary data copying. I was able to create a matrix class that passes matrices by value, but avoids actually duplicating the data whenever possible. This is great, but let’s see the laundry list of gotchas I ran into:

  1. A variable of type T&& cannot be passed to a function that accepts T&& (oops!).
  2. You fix it by using std::forward<T>(v) or std::move(v), but the difference between the two is confusing.
  3. When you see T&& in code, it may not in fact be an rvalue reference.
  4. If you are not careful, you may silently get copy behavior instead of move behavior.
  5. Overloads may not work the way you expect.

But let’s start with the good stuff.

What you can do with rvalue references.

I created a class matrix with operator+: complete source at https://github.com/ikriv/cpp-move-semantics.

Rvalue references are traditionally used to optimize copy constructors and assignment, but they can be just as useful for operator+. A traditional implementation of such an operator for matrices would look something like this:

matrix operator+(matrix const& a, matrix const& b)
{
    matrix result = a;
    result += b;
    return result;
}
If we have an expression like matrix result = a+f(b)+f(c)+d; the compiler will have no choice but to create a bunch of temporary matrices and copy a lot of data around. We can significantly reduce the amount of copying by adding rvalue-based overloads of operator+() that modify an rvalue parameter “in place” instead of allocating a new matrix. The idea is that an rvalue is by definition inaccessible outside the expression, and thus there is no harm in modifying it.

matrix operator+(matrix&& a, matrix const& b) // rvalue+lvalue
{
    return std::move(a += b);
}

matrix operator+(matrix const& a, matrix&& b) // lvalue+rvalue
{
    return std::move(b += a); // relies on matrix addition being commutative
}

matrix operator+(matrix&& a, matrix&& b) // rvalue+rvalue
{
    return std::move(a += b);
}

This does improve performance dramatically. With move semantics my code allocates 8 matrices. All rvalue reference overloads are conditionally compiled via #define MOVE_SEMANTICS. If I turn that off, the number of matrices allocated jumps to 12 in release mode and to 24 in debug mode.

So, rvalue references are useful, but they are still a minefield. You must really want those optimizations, and even then if you are not careful, you may not get what you expected. To be continued…


What number comes next?


I absolutely detest the “what number/letter/picture comes next” type of puzzles. They are typically based on catching the pattern that the author had in mind when creating the puzzle. Well, guess what: ANY number/letter/picture may come next! They are all equally valid. The number of possible patterns is infinite. Some are “simpler” or “more logical” than others, but in real life the patterns are rarely simple or logical.

Making hypotheses based on limited input is an important skill, but guessing which hypothesis the authors had in mind is not.


WCF vs .NET Remoting


Remoting is about distributed objects, while WCF is about services. Remoting is capable of passing object instances between the client and the server; WCF is not. This distinction is much more fundamental than technicalities like which technology is more efficient, more dead, or whether it can connect .NET to Java, but I fully realized it only recently, when I was debating which technology to use for communication between two specific .NET apps that we had.

Remoting is conceptually similar to other distributed objects technologies like DCOM, CORBA, and Java RMI. The server holds a stateful object, which the client accesses via a proxy. The client may pass its own objects to the server, and the server may pass more objects to the client. Bidirectional connection is essential, and proxy lifetime issues are a big thing.

WCF is conceptually similar to data-exchanging technologies like HTTP, REST, SQL and, to some extent, the Win32 API if we ignore callbacks. The client may call the server, but not necessarily vice versa. Only plain data can be exchanged between the client and the server. There are no proxies, and thus lifetime management issues don’t exist. If the server is complex and represents multiple entities, every call must identify the entity it refers to via some kind of moniker: it could be a URL, a table name, or a numeric handle as in the Win32 API.

So, Remoting and WCF are not really interchangeable except in the simplest scenario, when the server has only one object and all its methods deal only with plain data.

To illustrate, suppose you have a server that stores some kind of tree. With Remoting you can define the interface as follows:

public interface IRemotingTree
{
    string Data { get; set; }
    IRemotingTree AddChild(string data);
}

With WCF you cannot have properties or return callable objects, so the interface would look something like this:

public interface IWcfTree
{
    string GetData(string nodeId);
    void SetData(string nodeId, string value);
    string AddChildNode(string data); // returns node id
}

This is where the similarity with the Win32 API comes in: we cannot return objects from the call, so we must specify which tree node is being worked on by sending a node ID. If this ID is some opaque token like 0x1234, this looks like the Win32 API. If the ID encodes a path from the root to the node, e.g. “/4/2” (second child of the fourth child of the root), then it is more like REST.

In any case, unlike Remoting, WCF is not object-oriented, and everything that gets passed around is plain data. What are the implications of that in real life? That’s going to be the topic of a future post.

Dumbification of web sites


Over the last month or so I realized I was having difficulty using GMail. I could not put my finger on it, until it hit me: look at the size of the reply text entry field!

It occupies just 15% of the screen height and can barely hold 4 lines of text! I have a decent screen resolution of 1920×1080, so that can’t be the reason. Reducing the font size does not help: the input area becomes even smaller. I have more input lines of text on my cell phone than on my 24″ desktop monitor! Oh yeah, and the web page has 40 (forty) clickable thingies around the screen, half of which have nothing to do with e-mail.

I am sure it was not always this bad; it must have gotten worse over time. And GMail is not alone: I see other web sites developing stupid usability issues as well. I am not sure what to make of it. Is the Web becoming too complicated for developers to program? Is it the result of cost cutting and outsourcing? Do we have too many browsing devices that are too different? Or does nobody simply give a damn anymore? It’s hard to tell…



After upgrading to a newer version of Unity, we found that our process began to fail with a StackOverflowException. Investigation showed that it happened because the following registration behaves differently depending on the version of Unity:

container.RegisterType<TInterface, TType>(new InjectionFactory(...));

The GitHub project ikriv/UnityDecoratorBug explores the behavior for different versions. Strictly speaking, the registration is self-contradictory: it may mean either

// (1) Resolve TInterface to whatever is returned by the InjectionFactory
container.RegisterType<TInterface>(new InjectionFactory(...));


// (2) Resolve TInterface to container-built instance of TType
container.RegisterType<TInterface, TType>();

Early versions of Unity ignored the TType part and used interpretation (1). The release notes for version 5.2.1 declared that such registrations are no longer allowed, but in reality they were silently interpreted as (2), ignoring the InjectionFactory part.

This led to a catastrophic problem in our code. We had a decorator type similar to this:

public class Decorator : IService
{
    public Decorator(IService service) {...}
}

and the registration declaration was

container.RegisterType<IService, Decorator>(
    new InjectionFactory(c => new Decorator(c.Resolve<ConcreteService>())));

If the injection factory is ignored, this leads to infinite recursion when resolving IService: IService is resolved to Decorator, which takes an IService parameter, which is resolved to Decorator, which takes an IService parameter, and so on until the stack overflows.

In Unity version 5.7.0 this was fixed and such registrations throw an InvalidOperationException.

To recap:

Unity version    Effect of the registration
<= 5.2.0         Injection factory used
5.2.1 to 5.6.1   Injection factory ignored; StackOverflowException for decorators
>= 5.7.0         InvalidOperationException

Lessons learnt:

  • If something is not allowed, it must throw an exception.
  • Silently changing the interpretation of existing code is evil.

PS. This is probably the first time stackoverflow.com actually lived up to its name: usually I search for other kinds of exceptions there. 🙂


This Friday I wrote a unit test and, to my astonishment, found that “a” < “A” < “ab” with the .NET InvariantCulture and InvariantCultureIgnoreCase string comparers.

That means that .NET string sorting is not lexicographical, which came as a shock to me. If it were lexicographical, the order would have been “a” < “ab” < “A”.

If the strings differ by more than just case, case-sensitive and case-insensitive comparers (except the ordinal ones, see below) return the same result. Any of the strings in [“ab”, “aB”, “Ab”, “AB”] is less than any of [“ac”, “aC”, “Ac”, “AC”].

For strings that differ only by case, insensitive comparers return “equal” and sensitive comparers maintain order, so you get “ab” < “aB” < “Ab” < “AB”.

This produces a “natural” sorting where “google” and “Google” are close to each other, but it is not lexicographical. Consider the unsorted input “google, Google, human, zebra, Antwerp”. Lexicographically it would sort as “google, human, zebra, Antwerp, Google”, while most .NET comparers sort it as “Antwerp, google, Google, human, zebra”.

StringComparer.Ordinal and StringComparer.OrdinalIgnoreCase stand out: these two are truly lexicographical. Also, they put capitals before small letters, because this is the order in which they appear in Unicode. I created a little application that sorts strings using various comparers:


Input: a, ab, aB, ac, A, AB, Ab

Sorted with StringComparer.InvariantCulture:
a, A, ab, aB, Ab, AB, ac

Sorted with StringComparer.InvariantCultureIgnoreCase:
a, A, ab, aB, AB, Ab, ac

Sorted with StringComparer.Ordinal:
A, AB, Ab, a, aB, ab, ac


The lesson learnt: never assume anything you haven’t verified. I’ve been working with .NET for over 10 years, and I never doubted that string comparison is lexicographical (what else could it possibly be?). I was in for a big surprise.



A quota for women

I read that California has passed a law obligating public companies to include the correct percentage of women on their boards of directors. Not 50% yet, but the quota is already in place. Why only women, though? To make sure nobody feels left out, we should introduce a law that for every 10 board members there must be at least 3 women, 2 Mexicans, and one Black person. And one Chukchi, just for fun. Or, say, a Native American. A special commission will determine the degree of Mexican-ness and Black-ness from anthropological data, using calipers and a colorimeter to measure skull parameters and skin color. If a board member is not dark enough, or his nose is not wide enough, he will have to be fired and a racially correct candidate hired instead. These people have gotten completely out of hand, no respect for minorities whatsoever.

Cheap M-amp-Ms!


HTML entities are taking over. Here’s what I saw today in our local Target:



I was testing various sorting algorithms, so I had a base class SortTestBase that contained the actual tests and accepted the sorting algorithm in the constructor, and derived classes that supplied the algorithm.

The ReSharper test runner insisted on executing tests from the “naked” base class and then complained that “No suitable constructor was found”. I fixed this by making the base class abstract. Fortunately, the ReSharper test runner does not go as far as trying to instantiate abstract classes.

    public abstract class SortTestBase
    {
        private readonly Action<int[]> _sort;

        protected SortTestBase(Action<int[]> sort)
        {
            _sort = sort;
        }

        public void EmptyArray()
        {
            var arr = new int[0];
            _sort(arr);
            CollectionAssert.AreEqual(new int[0], arr);
        }

        // ... more tests ...
    }

    public class QuickSortTest : SortTestBase
    {
        public QuickSortTest() : base(QuickSort.Sort) {}
    }

Where is my 3D Earth?


I heard all the buzz about the 3D Earth view in Google Maps, but it did not work in my Chrome browser at home. I initially thought Google was doing some A/B testing, but then I found that it works in Edge from the same IP. It turns out 3D Earth requires hardware acceleration to be enabled in Chrome’s advanced settings. Once you enable it, all is good.



A few days ago I ran into a situation where I forgot to add --no-ff when merging a feature branch into master. The result was a mess.

State before the final merge:
Before merge

State after switching to master and doing git merge feature (bad state!):
merge without --no-ff

State after doing git merge --no-ff feature instead (good state):
merge with --no-ff

The flow of events was as follows:

  1. Create ‘feature’ branch off master
  2. Add some commits to feature and to master
  3. Merge latest master into feature to resolve any conflicts
  4. Work on the feature a little more

Now the feature is ready to be merged back into master. Since “feature” has all the commits that “master” has, plus some more, git by default will simply fast-forward “master” to be the same as “feature”, and the feature branch will effectively become the new master. It does not look good: feature and master commits are now mixed in the main branch line. The annoying part is that if the merge has already been pushed to origin, it is not easy to fix without force-pushes, and force-pushes are not always enabled. The end result is a very confusing commit history for the rest of the life of the project.

If you do use --no-ff, the history is preserved much better: all feature commits are kept in a side branch and only the merge is in the main line.

Click to download the batch file that demonstrates the problem (rename it from .txt). It creates the “bad” case by default. Pass it the --no-ff argument to create history without fast-forward.


I have come to the conclusion that things are very bad indeed with Russian orthography, and something has to change: the spelling must be brought closer to the living language. Modern orthography is a minefield where even professionals with a university education regularly make mistakes, while the “average” person, it seams, haz no chanse atall (4 mistakes in 6 words) of staying afloat, and drowns under the waves of soft signs in reflexive verbs, prefixes versus prepositions, and idiotic root alternations like rast-/rost-.

What exactly to do I don’t know, I am not a linguist, but this is just sum kind of horrer.

How to do console output in WPF


If you wish to access standard input and output in your WPF app, you will have to compile it as a Console app and fire up the WPF application object manually.

Sample application: https://github.com/ikriv/WpfStdOut.

Attaching to the console of the parent process, as proposed by this CSharp411 article (thanks to Roel van Lisdonk for the pointer), insists on writing to the actual console and does not work well with redirected output.

Steps to apply to your application:

  1. Create WPF application as you would normally.
  2. You can use Console.Out.WriteLine(), but the output will not go anywhere yet.
  3. Add a Program.cs file with the following boilerplate code:
    public class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
            var app = new App();
            app.InitializeComponent();
            app.Run();
        }
    }
  4. Right click on the project, and choose Properties→Application.
  5. Change “Startup object” to YourNamespace.Program.
  6. Change “Output type” to “Console Application”.

The downside is that when you run the WPF app from a shortcut, it will show a black console window in the background. This window can be hidden as described in this StackOverflow article.

This is not a major problem, though, since the purpose of console output is either to show it in a command line window or to redirect it to a file or stream. If push comes to shove, one can create a launcher that starts the “console” application with a hidden console window (ProcessStartInfo.CreateNoWindow).


Insane Apache RewriteRules


I have just spent a few fun hours fighting Apache rewrite rules in an .htaccess file. It turns out that temporary redirects just work, but permanent redirects require RewriteBase! Frankly, this is insane.

Suppose, I have my web site test.ikriv.com that lives in /var/www/html on the server disk. Let’s say I have the following contents in /var/www/html/.htaccess:

RewriteEngine on
RewriteRule ^(.*)\.temp$ $1.html
RewriteRule ^(.*)\.perm$ $1.html [R=permanent]

If I type https://test.ikriv.com/file.temp in the browser, I see the contents of /var/www/html/file.html, as expected. The server does not report a redirect back to the browser; it quietly serves file.html instead of file.temp.

If, however, I type https://test.ikriv.com/file.perm, I get redirected to https://test.ikriv.com/var/www/html/file.html and then get error 404, telling me no such file is available.

The root cause seems to be that in both cases mod_rewrite converts the incoming URL into var/www/html/file.html. In the first case it just serves this file, which works great; in the second case it returns the file path to the browser, which interprets it as a URL, and it all goes downhill from there. To fix the problem, I must add RewriteBase to the .htaccess file. To be fair, the Apache docs do say it is required, but they don’t say it is required only for permanent redirects. Most sane people would assume that if temporary redirects work, permanent redirects will work as well, but that is not the case.

RewriteEngine on
RewriteBase /
RewriteRule ^(.*)\.temp$ $1.html
RewriteRule ^(.*)\.perm$ $1.html [R=permanent]

Note that I need to specify the base URL of the directory, i.e. if the .htaccess file is in some other directory, I must add RewriteBase /that/dir/url. This is quite inconvenient, because

a) as I move directories around, RewriteBase may become broken, and
b) if a directory can be reached via multiple URLs, I will have to pick one as RewriteBase, with possible security issues and other inconveniences.

Why can’t we just make things work without surprises? That’s obviously a rhetorical question: one person’s surprise is another person’s completely obvious and logical behavior.


Instead of an epigraph, a joke:
- Mykola, let’s go beat up the Jews!
- What if they beat us up?
- Us? What for?!


The article discusses how evil Facebook bans women for “innocent” comments in the style of “all men are scum”. Special emphasis is placed on the fact that the women did not really mean anything bad; it’s just that the men drove them to it. Now tell me: why do the fighters for every kind of equality start shrieking hysterically “but what did WE do?” every time their own rules are suddenly applied to them? Maybe it was not equality they wanted, but something else?
The phrase in the title would be worthy of Jeeves, although it belongs to a simple auto mechanic. Most of us Jamshuts will never reach that level of command of English. But nothing can stop us from memorizing the ready-made phrase, n’est-ce pas?

The story itself, however, is far more interesting than the phrase. It appears there are places in America way tougher than Brighton Beach.


Cloning an Azure virtual machine (long)


I needed to move my virtual machine from one MSDN subscription to another. There is a “Move” option in Azure portal, but it works only if you can access both subscriptions from the same account. Alas, I was not so lucky, so I had to do it “the old way”.

The general idea is to copy the virtual disk (VHD) of the old virtual machine to a blob managed by the new subscription and create a VM from it. Unfortunately, there is no “create from custom VHD” option in the Azure portal; it can only create virtual machines from predefined images like Ubuntu or Windows Server. So, one has to use PowerShell. I followed this guide, but had to make some modifications.

  1. Stop the source virtual machine.
  2. Use AzCopy to copy VHD blob to a container under the new subscription.
  3. Create Subnet, Virtual Network, IP address, Network Security Group, and NIC as described in the guide. You will need an SSH rule instead of the RDP rule, but you can configure it later using the Azure Portal.
  4. Create the OS disk using the command below. DO NOT use the one in the guide.
  5. Continue to create the VM as in the guide.

OS Disk

Azure Virtual Machines have tons of properties, but the most difficult part is the OS disk. I got a lot of weird error messages before I was done. The magic command is

$vm = Set-AzureRmVMOSDisk -VM $vm -DiskSizeInGB 128 -CreateOption Attach -Linux -Caching ReadWrite -VhdUri $mydiskUri -Name $mydiskname

Strange Errors

It is critical to specify -CreateOption Attach and to give the disk a name. My first attempt actually created a Virtual Machine in a “failed” state, with a message, I quote:

OS Provisioning for VM 'vmname' did not finish in the allotted time. However, the VM guest agent was detected running. This suggests the guest OS has not been properly prepared to be used as a VM image (with CreateOption=FromImage). To resolve this issue, either use the VHD as is with CreateOption=Attach or prepare it properly for use as an image.

I also saw:

Cannot attach an existing OS disk if the VM is created from a platform or user image.


New-AzureRmVM : The entity name 'osDisk.name' is invalid according to its validation rule:

The root cause of the last one was that the OS disk name was empty, but hey, why not show a cool regular expression instead? Microsoft is usually pretty good with error messages, but these were not really helpful.

Getting to see both subscriptions under the same account

The Microsoft account system is highly confusing. The set of resources you see is determined by your login (naturally), by whether it’s a work or personal account, and by “directory”. “Work” account does not mean much; somehow my “work” account is still authenticated on a Microsoft server, but that’s another story. There are all kinds of limitations on which subscriptions can be seen or administered by whom. I ended up with quite a few permutations:

  • Email = my_personal_email, Work=false, Directory=org1; sees subscription 1
  • Email = my_personal_email, Work=true, Directory=ikriv.com; sees nothing
  • Email = my_personal_email, Work=true, Directory=org2; sees subscription 2
  • Email = my_org1_email, Work=false, Directory=org1; sees subscription 1
  • Email = my_org2_email, Work=false, Directory=org2; sees subscription 2

The only permutation I was unable to get is one account that would see both subscription 1 and subscription 2, so I could not use the “Move” functionality of the Azure portal.

Copying your VHD

Azure VM disks are stored as simple blobs in Azure storage, so anyone can copy them as long as they have the blob container access keys. No account information is required.

You cannot copy the VHD while the source VM is running. VHDs tend to be tens of gigabytes in size. To avoid paying for traffic, you want to keep the destination in the same data center as the source, and you want to use the AzCopy utility as opposed to downloading the data to your machine and uploading it back to Azure. NB: AzCopy is not part of the Azure SDK; at the time of writing, it is part of the “Azure Storage Tools” download. I am not adding the link here, because it will probably be obsolete in a year or two. Things move fast in Azureland.

Creating the destination VM

Since creating my actual VM took several tries, I am just pasting the commands together here. I haven’t tested this script, so use it as a starting point, and double-check all names before running: it is almost impossible to rename anything in Azure once created. E.g., I now have a security group literally named “myNsg” 🙂

$rgName = 'myResourceGroup'
$vmName = 'myVM'
$location = 'eastus' 
$vhdUri = 'https://...'
$subnetName = 'mySubNet'
$vnetName = "myVnetName"
$nsgName = "myNsg"
$ipName = "myIP"
$nicName = "myNicName"
$vmSize = "Standard_D1_v2"

$singleSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix 10.0.0.0/24 # prefix value was missing; 10.0.0.0/24 is an example

$vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location `
    -AddressPrefix 10.0.0.0/16 -Subnet $singleSubnet # prefix value was missing; 10.0.0.0/16 is an example

$rdpRule = New-AzureRmNetworkSecurityRuleConfig -Name myRdpRule -Description "Allow RDP" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 110 `
    -SourceAddressPrefix Internet -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 3389

$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Location $location `
    -Name $nsgName -SecurityRules $rdpRule

$pip = New-AzureRmPublicIpAddress -Name $ipName -ResourceGroupName $rgName -Location $location -AllocationMethod Static

$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName `
    -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id -NetworkSecurityGroupId $nsg.Id

$vm = New-AzureRmVMConfig -VMName $vmName -VMSize $vmSize
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -DiskSizeInGB 128 -CreateOption Attach -Linux -Caching ReadWrite -VhdUri $vhdUri -Name "myOsDisk"
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm


The world of Azure continues to be confusing and to move fast. You routinely find obsolete screenshots of web pages that no longer exist or look quite different now, and references to commands that no longer work. In a way, Microsoft has adopted the Linux-y development model of “do something now, make it look good later”, which does not always yield the best results. At the end of the day I was able to accomplish the task, but it took me a few hours to get there. Perhaps in the future they will add an option to simply create a VM from a VHD and be done with it.

I also wonder how one would create N identical virtual machines that perform some kind of network-distributed task, but that’s a matter for another conversation.



Веселый носорог
Yaturkenzhensirhiv - a handheld spy
