
Originally published at www.ikriv.com. Please leave any comments there.

TL;DR: wrapping an existing method DoSomething(MyData data) with an identically named generic extension method DoSomething<T>(T data) is a bad idea, since it erases compile-time type checking. DoSomething() may now be called with any type, and it may be hard to tell whether you are calling the original or the wrapper.

The following is a true story, even though some details have been changed to protect the innocent. Suppose we have a messaging library along these lines:

interface IMessage
{
    Type Type { get; }
    object Data { get; }
}

class Message : IMessage
{
    public Type Type { get; set; }
    public object Data { get; set; }
}

class Sender
{
    public void Send(Message message)
    {
        // ...do actual sending here...
        Console.WriteLine($"Send: Type={message.Type.Name}, Data={message.Data}");
    }
}

Expressions like sender.Send(new Message {Type = typeof(int), Data = 42}) quickly become an annoyance, so we create an extension method to simplify them:

static class SenderExtensions
{
    public static void Send<T>(this Sender sender, T data)
    {
        sender.Send(new Message {Type = typeof(T), Data = data});
    }
}

Now we can simply write sender.Send(42) and it works as expected. But… consider this code:

var message = new Message {Type = typeof(int), Data = 42};
sender.Send(message); // prints Send: Type=Int32, Data=42

var iMessage = (IMessage) message;
sender.Send(iMessage); // oops! Type=IMessage, Data=ShootFoot1.Message

We cast our message to the IMessage type, but the Send() method expects a parameter of type Message. That by itself is probably a design error, but let’s ignore it. ¡No problemo! says the compiler: passing an IMessage compiles, but it calls the extension method instead of the regular one and wraps our message in another Message object.

The same would have happened with a generic overload of Send(), but an extension method is especially evil, since it is invisible: you cannot guess its existence by looking at either the call site or the definition of the Sender class. I personally marveled at the wrapped message in the debugger for at least 10 minutes before I realized what was going on. Of course, the real program was much more complex and had many additional details to distract me.

Moral of the story: if you create a generic extension method like this, give it a distinct name. E.g. if the extension method were called SendObject(), then Send(iMessage) would simply not have compiled.
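
For illustration, the renamed wrapper could look like the sketch below (the linked fixed version is the authoritative one). With a distinct name, sender.SendObject(42) still works, while sender.Send(iMessage) simply does not compile:

static class SenderExtensions
{
    // Distinct name: no longer competes with Sender.Send(Message)
    public static void SendObject<T>(this Sender sender, T data)
    {
        sender.Send(new Message {Type = typeof(T), Data = data});
    }
}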

Complete code:
The original, buggy version
Fixed version with renamed extension method


The Pólya Conjecture

Originally published at www.ikriv.com. Please leave any comments there.

I have finally memorized one simple mathematical statement that is true for pretty much any number you can think of, but is in fact verifiably false, with known counterexamples. The Pólya conjecture is true for any number up to about 900 million, but is false for the next 300,000 or so numbers, and is ultimately false in general.

It is well known that any integer greater than 1 can be represented as a product of one or more prime numbers, and only one such representation exists for each integer. E.g. 21 = 7*3, 999 = 37*3*3*3, etc. The Pólya conjecture states that for any integer N, at least half of the numbers between 2 and N have an odd number of prime factors, counted with multiplicity.

For example, for N=10 there are five integers with an odd number of prime factors (2, 3, 5, 7, 8) and four integers with an even number of prime factors (4, 6, 9, 10). The number 8 has 3 prime factors, because 8=2*2*2.
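
A quick way to reproduce this tally (a C# sketch, to match the rest of this blog’s snippets; trial division is plenty for small N):

using System;

class PolyaTally
{
    // Number of prime factors of n, counted with multiplicity (e.g. 8 -> 3)
    static int CountPrimeFactors(int n)
    {
        int count = 0;
        for (int p = 2; p * p <= n; ++p)
        {
            while (n % p == 0) { ++count; n /= p; }
        }
        if (n > 1) ++count; // whatever remains is itself prime
        return count;
    }

    static void Main()
    {
        const int N = 10;
        int odd = 0, even = 0;
        for (int k = 2; k <= N; ++k)
        {
            if (CountPrimeFactors(k) % 2 != 0) ++odd; else ++even;
        }
        Console.WriteLine($"N={N}: odd={odd}, even={even}"); // prints N=10: odd=5, even=4
    }
}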

The smallest counterexample to the conjecture is 906,150,257. This is a very strong philosophical argument against statements like “we checked this for the first 10,000 numbers, so it cannot possibly be false”.

Of course, everyone understands that it is possible to define a property that is true for all numbers up to SOME_BIG_NUMBER and false after that. In fact, it can be as simple as f(n) = n < SOME_BIG_NUMBER. However, we seem to intuitively believe that if the algorithm is simple, does not have a “built-in” explicit threshold or trend, and is true for the first few thousand numbers, then it must be true for all numbers. If it is true for the first million numbers, this conviction becomes virtually absolute.

Delving too much into infinitely small probabilities and theoretical curiosities is ultimately harmful to one’s well-being, as in real life decisions must often be made quickly and with limited information. Those who ignored that were eaten by saber-toothed tigers a long time ago and did not leave any heirs.

The graph below plots the difference between even-factor numbers and odd-factor numbers for N between 2 and about 10 million. It leaves very little to the imagination: even-factor numbers clearly lose.

If the Pólya conjecture were decided in a court of law, I don’t think the pro-conjecture side would have any trouble winning the case after producing a math professor who would truthfully testify that he ran the calculations for the first 10 million numbers and the conjecture held for every one of them. And yet, the court would be wrong, mathematically speaking.


Originally published at www.ikriv.com. Please leave any comments there.

I created an ASP.NET Core project with Docker support. Good thing: it works. Bad thing: the “dev” image it creates does not really contain the application code. The code is added to the container at run time using local volume mappings. If you inspect the image, you will find the “/app” directory empty, and there is nothing to run there. To create a usable image one needs to do “publish” in Visual Studio. This creates a “final” image that does contain the code, can be published to DockerHub, and can then be run on a remote machine.

Visual Studio invokes a rather complex “docker run” command to create a “dev” container from the image. In my case it is

docker run -dt -v "C:\Users\ivan\vsdbg\vs2017u5:/remote_debugger:rw" -v "C:\ivan\dev\exp\dotnetcore\AspNetJekyll:/app" -v "C:\Users\ivan\AppData\Roaming\ASP.NET\Https:/root/.aspnet/https:ro" -v "C:\Users\ivan\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:ro" -v "C:\Users\ivan\.nuget\packages\:/root/.nuget/fallbackpackages2" -v "C:\Program Files\dotnet\sdk\NuGetFallbackFolder:/root/.nuget/fallbackpackages" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=https://+:443;http://+:80" -e "ASPNETCORE_HTTPS_PORT=44355" -e "NUGET_PACKAGES=/root/.nuget/fallbackpackages2" -e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages;/root/.nuget/fallbackpackages2" -p 7159:80 -p 44355:443 --entrypoint tail aspnetjekyll:dev -f /dev/null

This command does not seem to appear in any of the project settings visible in the UI or in the project files. Frankly, I don’t like that level of magic: it is too opaque and difficult to debug or change if something goes wrong.


Unity Container and double singleton

Originally published at www.ikriv.com. Please leave any comments there.

It turns out that after re-registering a type as a singleton in Unity, calls to container.Resolve() return a new instance. This may lead, and has led, to ugly bugs if your code relies on having only one instance of the type. A proofing test:

using NUnit.Framework;
using Unity;
using Unity.Lifetime;

namespace DoubleSingletonRegistration
{
    [TestFixture]
    public class DoubleSingletonTest
    {
        public class Foo {}

        [Test]
        public void NewInstanceReturnedAfterReregistration()
        {
            var container = new UnityContainer();

            container.RegisterType<Foo>(new ContainerControlledLifetimeManager());
            var foo1 = container.Resolve<Foo>();
            var foo2 = container.Resolve<Foo>();

            container.RegisterType<Foo>(new ContainerControlledLifetimeManager());
            var foo3 = container.Resolve<Foo>();
            var foo4 = container.Resolve<Foo>();

            Assert.AreSame(foo1, foo2);
            Assert.AreNotSame(foo2, foo3); // new instance returned after re-registration!
            Assert.AreSame(foo3, foo4);
        }
    }
}


Google killed my feedback page

Originally published at www.ikriv.com. Please leave any comments there.

Google shut down Recaptcha v1 on March 31st, killing my feedback page in the process. I found out yesterday and fixed it today. Pros: the new captcha is much easier to use and much less annoying. Cons: I had to sit down and code the thing, emergency style, and the page was effectively down for two weeks! This is what happens to web sites that depend on external tools. Nothing is forever; even Notre Dame has burnt… I am sure Google would say they sent out warnings, but I did not get any. I just double-checked my Gmail account: nothing’s there. Anyhow, it is fixed now. Moving on.


Originally published at www.ikriv.com. Please leave any comments there.

Today I spent several hours trying to find out why this code does not work:

function doit() {
    return
        someObject
            .method1()
            .method2();
}

but this one does:

function doit() {
    let temp;
    return temp =
        someObject
            .method1()
            .method2();
}

It involved some rather complicated promises and what-not, so I was tossing them around, trying to figure out where it got stuck. The answer, however, is rather simple: unfortunate code formatting. When return is on a line by itself, JavaScript inserts an implicit semicolon after it and completely ignores the subsequent code, quietly returning undefined:

function doit() {
    return;        // implicit semicolon inserted here
    someObject     // unreachable
        .method1()
        .method2();
}

Porting formatting habits from one language to another may be dangerous: not all whitespace is created equal. It is kind of obvious with Python, but JavaScript may be tricky as well.

Of course, this has happened to other people before. There is even a note in some coding conventions that reads “The return value expression must start on the same line as the return keyword in order to avoid semicolon insertion”. In fact, I knew about the implicit semicolon, but I did not connect the dots; I was too busy debugging my promises 🙂

That’s all folks 🙂

Rant about Win32 API

Originally published at www.ikriv.com. Please leave any comments there.

Coming back from the .NET Framework, it feels like the Win32 API was invented by some rather cold people with total disregard for the well-being of application programmers. Every other function is a minefield designed to suck time and energy from its user.

Let’s start small. GetEnvironmentVariable. When given a small or NULL buffer, it returns the number of characters in the string including the terminating null character. When given a large enough buffer, it returns the number of characters in the string not including the terminating null character. There are probably good philosophical reasons for that, but for a programmer who just wants to get the environment variable and be done with it, this looks like a simple lack of consistency.
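
To make the inconsistency concrete, here is a sketch of the usual two-call pattern via P/Invoke (the Win32 signature is the real one; the Win32Env helper around it is just an illustration):

using System;
using System.Runtime.InteropServices;
using System.Text;

static class Win32Env
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern uint GetEnvironmentVariable(string lpName, StringBuilder lpBuffer, uint nSize);

    public static string Get(string name)
    {
        // Call 1: buffer too small -> return value INCLUDES the terminating null
        // (or 0 if the variable does not exist)
        uint required = GetEnvironmentVariable(name, null, 0);
        if (required == 0) return null;

        var buffer = new StringBuilder((int)required);
        // Call 2: buffer large enough -> return value EXCLUDES the terminating null
        uint written = GetEnvironmentVariable(name, buffer, required);
        return buffer.ToString(0, (int)written);
    }
}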

Next level: CreateDirectory. In .NET it creates all subdirectories recursively if necessary: if I ask for c:\foo\bar\baz and I only have c:\foo, then c:\foo\bar will be created for me as well. Win32 requires the application programmer to create each subdirectory separately. That means we have to manually parse the path, find the deepest subdirectory that already exists, and create its children, in order (a sketch follows). This may seem like an easy task, but it’s not, especially in C or C++ where any string manipulation is a pain. Breaking the path into parts is only half the problem. We also need to deal with a variety of corner cases. A leading slash cannot be ignored: \foo is not the same as foo. A trailing slash usually must be ignored, but c: is different from c:\. \\foo\bar does not have a parent directory; its lexical parent \\foo is not a directory at all. con\foo is an invalid path, since “con” cannot be used as a file name. To get it all right you’ll need patience, good knowledge of the corner cases, and some unit tests. It is unreasonable to demand this from every application programmer who just wants to create a directory, and to expect them to repeat it in every application.
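
A deliberately naive sketch of that manual work (again C# over P/Invoke; it ignores every corner case listed above, UNC shares, leading slashes, reserved names, which is exactly the point):

using System;
using System.Runtime.InteropServices;

static class Win32Dirs
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool CreateDirectory(string lpPathName, IntPtr lpSecurityAttributes);

    const int ERROR_ALREADY_EXISTS = 183;

    // Naive: assumes a plain "c:\foo\bar\baz" style path
    public static void CreateRecursive(string path)
    {
        var parts = path.TrimEnd('\\').Split('\\');
        string current = parts[0]; // e.g. "c:"
        for (int i = 1; i < parts.Length; ++i)
        {
            current = current + "\\" + parts[i];
            if (!CreateDirectory(current, IntPtr.Zero) &&
                Marshal.GetLastWin32Error() != ERROR_ALREADY_EXISTS)
            {
                throw new InvalidOperationException("Cannot create " + current);
            }
        }
    }
}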

Next level: CreateProcessW. Why does it have to write to the command line buffer? What value does it provide to the user? Was it a performance optimization? If so, it’s a bad one. We are creating a process, which typically means reading files and stuff; copying a few hundred bytes to an internal buffer would have a negligible effect on performance in this case. Modifying otherwise logically read-only input, on the other hand, has profound implications. In particular, it causes an exception if said input resides in read-only memory, e.g. hard-coded into the executable. Small savings for Win32 developers, huge pain for everyone else.

Going even higher: CommandLineToArgvW. It is supposed to parse a command line into an argument list. This is the most dangerous kind of API call: it sort of works, but not always, and it fails when you need it the most. The documentation warns that the input must start with a valid file name, or the behavior may be unpredictable. The warning itself is buried deep in the documentation and is quite confusing, but that’s not an API design problem, so we will ignore it. This warning must be taken seriously. For example, given the input string a.exe b" a"z, the output is ["a.exe", "ba z"], which is correct given Windows command line parsing rules. But if the input is just b" a"z, then the output is ["b\" a", "z"], and that is not correct at all. To add insult to injury, the lpCmdLine passed to WinMain does not begin with the executable path. Also, where is the ANSI version? Do I not need to parse the command line in ANSI programs? The bottom line is that, the way it is implemented, this function is virtually useless.

.NET Framework has far fewer trapdoors of this kind, at least at the basic API level, and revisiting the Win32 API makes one really thankful for that.


The definition of truth, Russian style

Originally published at www.ikriv.com. Please leave any comments there.

Warning: this post is politically charged.

Russia has just enacted a law that forbids dissemination of “untruthful information” over the Internet. The actual legal definition is very lengthy, but it is quite broad and virtually irrelevant, as in practice it gives the government an opportunity to legally take down any information it does not like.

This is neither new nor specific to Russia, but an interesting twist is that the determiner of ultimate truth is… the Attorney General’s office of the Russian Federation. If I am reading the law correctly, the procedure is as follows: upon noticing untruthful information that may potentially threaten something important*, the Attorney General’s office informs the Communication Overseer Commission, which then orders the owner of the communication resource to take it down or face consequences.

This is it: there is not even a court hearing. In theory, this allows the government to declare rumors that the Earth rotates around its axis dangerous to public order and forbid publishing this untruthful information under penalty of law. Of course, any government is capable of ignoring even very specific limits to its power: Russia learns from the best.

I thought American laws where, for example, “deadly weapon” may actually mean “a pillow” were too broad and overreaching. I stand humbled. There is something awe-inspiring in giving the Attorney General’s office the official power to decide what is true and what is not. Perhaps that book should have been called “2019”…

*”Something important” is defined as a long list of things such as life, health, and property of the citizens, stability of public order, public safety, functioning of the industry, the energy sector, credit institutions (!), communication infrastructure, and the like. In practice any information may potentially [be declared to] negatively affect one or more of those items.

The original law (in Russian).

The modifications expanding it to “untruthful information that may potentially threaten something important” (also Russian).


Why constructors must be simple

Originally published at www.ikriv.com. Please leave any comments there.

Recently I got another reminder why constructors should be simple. The situation in a nutshell: class BaseHandler subscribes to an observable in its constructor. The handler method is virtual; class DerivedHandler adds incoming items into a list initialized in its own constructor. So there is a potential virtual function call in a constructor, but there is also a race condition. If there is nothing in the observable at the time of construction, everything works. But if there is, the virtual function gets called from the base constructor, the derived handler operates on a non-initialized instance, and hilarity ensues.

Complete code is below, and is also on GitHub. If the conditions are right, the exception occurs in DerivedHandler.HandleItem(), because _items is null. What is especially annoying, _items is marked readonly, so we mentally block the idea that it may, in fact, change value over time.

Yes, C++ has the Resource Acquisition Is Initialization paradigm, but C++ is a different language. C++ also effectively rules out virtual function calls in constructors: they dispatch to the base class version, and pure virtual calls blow up. C# does allow virtual function calls in constructors, which is probably fine, but combined with observables this leads to funny results.

The moral of the story is that we are all better off if constructors do not attempt to do anything fancy. Perform the bare minimum initialization, and then leave the rest to a Start() or Connect() method. Yes, you’ll have to call that method explicitly, but this is a small price to pay for not having to hunt ridiculous race conditions. (A sketch of this approach follows the complete code below.)

// Base class, provides virtual HandleItem() method called when new item is observed
class BaseHandler : IDisposable
{
    private readonly IDisposable _subscription;

    public BaseHandler(IObservable<int> items)
    {
        _subscription = items.Subscribe(HandleItem);
    }

    public void Dispose()
    {
        _subscription.Dispose();
    }

    // default handler for observed item prints the value
    protected virtual void HandleItem(int n)
    {
        Console.WriteLine(n);
    }
}

// Derived class handles observed items by adding them to a list
class DerivedHandler : BaseHandler
{
    private readonly List<int> _items; // not initialized here on purpose

    public DerivedHandler(IObservable<int> items)
        :
        base(items)
    {
        _items = new List<int>();
    }

    protected override void HandleItem(int n)
    {
        Console.WriteLine("Adding item: {0}", n);
        _items.Add(n); // this causes a NullReferenceException if called from the base's constructor
    }
}

class Program
{
    static void Main()
    {
        using (var subject = new ReplaySubject<int>())
        {
            // Publish an item before constructing any handler objects; this will cause an exception in constructor
            // Comment this out to prevent the exception
            subject.OnNext(42); 

            using (new DerivedHandler(subject))
            {
                // publish a new item to process
                subject.OnNext(43);
                Console.ReadLine();
            }
        }
    }
}
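
For comparison, a sketch of the deferred-subscription variant suggested above (the Start() name and the SaferBaseHandler class are just an illustration, using the same namespaces as the listing): the constructor only stores the dependency, and the subscription happens after the most derived constructor has finished.

// Sketch: subscribe in an explicit Start() instead of the constructor,
// so HandleItem can never run before the derived class is fully initialized
class SaferBaseHandler : IDisposable
{
    private readonly IObservable<int> _items;
    private IDisposable _subscription;

    public SaferBaseHandler(IObservable<int> items)
    {
        _items = items; // bare minimum: just remember the dependency
    }

    public void Start()
    {
        _subscription = _items.Subscribe(HandleItem);
    }

    public void Dispose()
    {
        _subscription?.Dispose();
    }

    protected virtual void HandleItem(int n)
    {
        Console.WriteLine(n);
    }
}

// Usage: construct first, then start explicitly:
// using (var handler = new SaferBaseHandler(subject)) { handler.Start(); subject.OnNext(43); ... }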

Coins

Originally published at www.ikriv.com. Please leave any comments there.

After a trip to Europe I ended up with a mix of US and Euro coins in my wallet. Sorting them out proved surprisingly easy: if you can quickly figure out the denomination by looking at the coin, it’s a Euro coin; if not, it’s a US coin.

Euro coins
US Coins

This is not surprising: Euro coins were introduced simultaneously in multiple countries, and nobody wanted people to get confused, so the design was made as clear as possible, whereas US coins have traditional designs. The denomination is sort of written on the coin, but it is not prominent, and in the case of the 10c coin it proudly proclaims “one dime”. I bet very few people outside of the US know what a “dime” is. So, here we go, a very easy sorting method found.


Originally published at www.ikriv.com. Please leave any comments there.

This is a continuation of the series about C++ rvalue references. First post, second post, third post. Rvalue references are unique, because if you have a variable of type T&&, you cannot pass it to a function that accepts a T&&.  

class SomeClass;
void f(SomeClass&& ref);
void g(SomeClass&& ref)
{
    f(ref); // error C2664: cannot convert argument 1 from 'SomeClass' to 'SomeClass &&'
}

Of course, there is a good explanation for this, but it still feels like a violation of a fundamental principle of software development. If you have a function that accepts bananas, and you have a banana, you should be able to pass your banana to that function, shouldn’t you? Of course, even prior to rvalue references you could stop this from happening, e.g. by making the copy constructor private. Consider:

class SomeClass { SomeClass(SomeClass const&); /* private copy constructor */ };
void f(SomeClass c);
void g(SomeClass c)
{
    f(c); // error C2248: cannot access private member declared in class 'SomeClass'
}

But that’s different, since in that case things work as expected by default, and you need to make an effort to break them. With rvalues things are weird by default.

The reason is that an rvalue is by definition something without a name. Once we create a parameter out of it and give it a name (“ref” in our example), it stops being an rvalue. “ref” binds to an rvalue, but is in itself an lvalue: we can assign to it, and that is legal. Rvalue-ness is not transitive. To convert “ref” back to an rvalue, we need to use either move() or forward(). E.g. this works:

class SomeClass;
void f(SomeClass&& ref);
void g(SomeClass&& ref)
{
    f(std::move(ref));
}

See also: what is the difference between move() and forward()?


Originally published at www.ikriv.com. Please leave any comments there.

This is a continuation of the C++ series, previous post here. When I was working on the move semantics example, I ran into two familiar, yet forgotten “features” of C++ that caused me major pain:

  • Template code is not verified for correctness unless instantiated.
  • Any single-parameter constructor creates an implicit conversion.

These annoyances are not created equal: the former is much, much more painful than the latter. Modern C++ is all about templates, but you can write any old nonsense in a template: as long as it can pass the lexer, the template will compile. The real correctness check happens when (and if) the template is instantiated, and in C++ it is often hard to tell when that happens and whether it happens at all, especially if we are talking about a nameless method like a constructor or an assignment operator. In practice this means that it is entirely possible to write a method with a typo, have the program compile and even run, and then discover the error days, weeks, or even months later.

For instance, if I go to matrix.h in my example and replace the assignment operator of the matrix class with the following “code”,

matrix& operator=(matrix const& other)
{
  <div style="color:red;font-size:15px">
    FREE PIZZA FOR ALL!!!
  </div>
}

the program still happily compiles. The assignment operator for matrices is never instantiated. Some rudimentary checks are done: e.g. &nsbp; compiles, but @override does not, since ‘@’ is an illegal character. I am not sure what the rules are exactly (the C++ standard costs top dollar), but anyway, the problem is not limited to typos: you can write code that could not be valid C++ under any circumstances, and it will compile.

Implicit conversions are less frequent, but sometimes more baffling. I have a class logger with a constructor along these lines:

class logger
{
public:
    logger(std::ostream& stream) : n_indent(0), m_stream(stream) {}
    ...
};

I then have two operators:

std::ostream& operator<<(std::ostream& stream, logger const& logger);
template<typename T> std::ostream& operator<<(logger const& logger, T&& value);

If I write something like

logger x(cout); 
cout << x;

I get a compilation error: two operators above are in conflict. The first one is applicable, but involves an implicit conversion of the right operand from logger to logger const&, and the second one is applicable, but involves an implicit conversion of the left operand from ostream to logger. C++ cannot choose between the two, and thus complains.

When I defined the constructor for logger, the last thing I thought of was implicit conversions. When I got the ambiguity error, it took me a few moments to realize what was going on, and then I said ‘aha!’ and made the constructor explicit. Problem solved.

BUT: turning any single-parameter constructor into an implicit conversion by default was a poor language design choice. I am sure that if C++ were reworked from scratch, it would have made conversions explicit by default and would have had a keyword implicit to override that. But C++ is not being reworked from scratch, and thus we get those potential land mines every time we absent-mindedly define a single-parameter constructor and forget to mark it explicit. Not cool.

To conclude: I am not talking here about some abstract impurities that may make one’s life difficult in theory. These two actually did make my life difficult in the course of the few hours it took me to create the example.


Originally published at www.ikriv.com. Please leave any comments there.

This is a continuation of my previous post about rvalue references. As you may know, the official C++ specification is not free: it costs upwards of $100, which, I dare say, is a non-trivial amount of money for any developer, even in first-world countries.

This looks like a remnant of the dinosaur age in a modern world where open source is king and the specification of virtually any other popular language (Java, Python, JavaScript, C#) is free to download.

The official ISO web site offers the C++ standard PDF for 198 CHF (about $200), ANSI wants $232, but then only $116 if you go through a different link (amazing, eh?).

Still, even at $116 this is out of reach/unreasonable for most people. Yes, draft versions of the standard are available for free, but you never know how much they differ from the official one, and final drafts are password protected.

I saw some people go as far as to claim that one does not need the standard unless one is a language lawyer. I find this point of view condescending and, frankly, wrong. You cannot learn C++ by looking at the standard, but in quirky cases (and trust me, C++ has plenty), it is always good to consult the official law. So, what can a developer do? Cough up $116 (unlikely), find a friend who has a copy (also unlikely, but possible), obtain it illegally on some kind of shady web site, or resort to a draft hoping it is not too different from reality. None of these choices is attractive.

I remember that a PDF copy of the C++98 standard used to be available for something like $20: that was a smart move and a reasonable compromise. To wit, I actually bought a copy. While still quite expensive for many in developing countries, it was cheaper than most technical books you find on Amazon or in bookstores. I bet they made more money on that $20 standard than on the $200 one they have now.

Oh well… Going back to my free languages now. 🙂


Originally published at www.ikriv.com. Please leave any comments there.

This weekend I looked at C++ rvalue references and move semantics. Here’s the code; the good stuff is in matrix.h.

Rvalue references seem useful, but way too treacherous. Any programming construct that raises doubts about the nature of objective reality and evokes associations with quantum physics (‘reference collapse’ ~ ‘wave function collapse’) is probably too complicated for its own good. But… maybe you don’t believe me? The great Scott Meyers explains it beautifully in his Channel 9 presentation on rvalue references. I was unable to get it right on my own before I watched that.

Rvalue references can be used to reduce unnecessary data copying. I was able to create a matrix class that passes matrices by value, but avoids actually duplicating the data whenever possible. This is great, but let’s see the laundry list of gotchas I ran into:

  1. Variable of type T&& cannot be passed to a function that accepts T&& (oops!).
  2. You fix it by using forward(v) or move(v), but the difference between the two is confusing.
  3. When you see T&& in code, it may not in fact be an rvalue reference.
  4. If you are not careful, you may silently get copy behavior instead of move behavior.
  5. Overloads may not work the way you expect.

But let’s start with the good stuff.

What you can do with rvalue references.

I created a class matrix with operator+: complete source at https://github.com/ikriv/cpp-move-semantics.

Rvalue references are traditionally used to optimize copy constructors and assignment, but they can be just as useful in the case of operator+. A traditional implementation of such an operator for matrices would look something like this:

matrix operator+(matrix const& a, matrix const& b)
{
    matrix result = a;
    result += b;
    return result;
}

If we have an expression like matrix result = a+f(b)+f(c)+d; the compiler will have no choice but to create a bunch of temporary matrices and copy a lot of data around. We can significantly reduce the amount of copying by adding rvalue-based overloads of operator+() that modify an rvalue parameter “in place” instead of allocating a new matrix. The idea is that an rvalue is by definition inaccessible outside the expression, and thus there is no harm in modifying it.

matrix operator+(matrix&& a, matrix const& b) // rvalue+lvalue
{
    return std::move(a += b);
}

matrix operator+(matrix const& a, matrix&& b) // lvalue+rvalue
{
    return std::move(b += a);
}

matrix operator+(matrix&& a, matrix&& b) // rvalue+rvalue
{
    return std::move(a += b);
}

This does improve the performance dramatically. With move semantics enabled, my code allocates 8 matrices. All rvalue reference overloads are conditionally compiled via #define MOVE_SEMANTICS; if I turn that off, the number of matrices allocated jumps to 12 in release mode and to 24 in debug mode.

So, rvalue references are useful, but they are still a minefield. You must really want those optimizations, and even then if you are not careful, you may not get what you expected. To be continued…


What number comes next?

Originally published at www.ikriv.com. Please leave any comments there.

I absolutely detest the “what number/letter/picture comes next” type of puzzles. They are typically based on catching the pattern that the author had in mind when creating the puzzle. Well, guess what: ANY number/letter/picture may come next! They are all equally valid. The number of possible patterns is infinite. Some are “simpler” or “more logical” than others, but in real life the patterns are rarely simple or logical.

Making hypotheses based on limited input is an important trait, but guessing which hypothesis the authors had in mind is not.


WCF vs .NET Remoting

Originally published at www.ikriv.com. Please leave any comments there.

Remoting is about distributed objects, while WCF is about services. Remoting is capable of passing object instances between the client and the server; WCF is not. This distinction is much more fundamental than technicalities like which technology is more efficient, which is more dead, or whether it can connect .NET to Java, but I fully realized it only recently, when I was debating which technology to use for communication between two specific .NET apps that we had.

Remoting is conceptually similar to other distributed objects technologies like DCOM, CORBA, and Java RMI. The server holds a stateful object, which the client accesses via a proxy. The client may pass its own objects to the server, and the server may pass more objects to the client. Bidirectional connection is essential, and proxy lifetime issues are a big thing.

WCF is conceptually similar to data-exchanging technologies like HTTP, REST, SQL and, to some extent, the Win32 API if we ignore callbacks. The client may call the server, but not necessarily vice versa. Only plain data can be exchanged between the client and the server. There are no proxies, and thus lifetime management issues don’t exist. If the server is complex and represents multiple entities, every call must identify the entity it refers to via some kind of moniker: it could be a URL, a table name, or a numeric handle like in the Win32 API.

So, Remoting and WCF are not really interchangeable except in the simplest scenario, when the server has only one object and all its methods deal only with plain data.

To illustrate, suppose you have a server that stores some kind of tree. With Remoting you can define the interface as follows:

public interface IRemotingTree
{
    string Data { get; set; }
    IRemotingTree AddChild(string data);
}

With WCF you cannot have properties or return a callable object, so the interface would look something like this:

[ServiceContract]
public interface IWcfTree
{
    [OperationContract]
    string GetData(string nodeId);

    [OperationContract]
    void SetData(string nodeId, string value);

    [OperationContract]
    string AddChildNode(string data); // returns node id
}

This is where similarity with Win32 API comes in: we cannot return multiple objects from the call, so we must specify which tree node is being worked on by sending node ID. If this ID is some opaque token like 0x1234, then this would look like Win32 API. If this ID encodes a path from the root to the node, e.g. “/4/2” (second child of the 4th child of the root), then it would be more like REST.

In any case, unlike Remoting, WCF is not object-oriented, and everything that gets passed around is plain data. What are the implications of that in real life? That’s going to be the topic of a future post.

Dumbification of web sites

Originally published at www.ikriv.com. Please leave any comments there.

In the last month or so I realized that I am having difficulty using GMail. I could not put my finger on it, until it hit me: look at the size of the reply text entry field!

It occupies just 15% of the screen height, and can barely hold 4 lines of text! I have a decent screen resolution of 1920×1080, so that can’t be the reason. Reducing the font size does not help: the window becomes even smaller.  I have more input lines of text on my cell phone than on my 24″ desktop monitor! Oh yeah, and the web page has 40 (forty) clickable thingies around the screen, half of which have nothing to do with e-mail.

I am sure it was not always so bad; it must have gotten worse over time. And GMail is not alone: I see other web sites developing stupid usability issues as well. I am not sure what to make of it. Is the Web becoming too complicated for developers to program? Is it the result of cost cutting and outsourcing? Do we have too many browsing devices that are too different? Or does simply nobody give a damn anymore? It’s hard to tell…


Originally published at www.ikriv.com. Please leave any comments there.

After upgrading to a newer version of Unity, we found that our process began to fail with StackOverflowException. Investigation showed that it happened because the following registration behaves differently depending on the version of Unity:

container.RegisterType<TInterface, TType>(new InjectionFactory(...));

Github project ikriv/UnityDecoratorBug explores the behavior for different versions. Strictly speaking, the registration is self-contradictory: it may mean either

// (1) Resolve TInterface to whatever is returned by the InjectionFactory
container.RegisterType<TInterface>(new InjectionFactory(...));

or

// (2) Resolve TInterface to container-built instance of TType
container.RegisterType<TInterface, TType>();

Early versions of Unity ignored the TType part and used interpretation (1). The release notes for version 5.2.1 declared that such registrations are no longer allowed, but in reality they were silently interpreted as (2), ignoring the InjectionFactory part.

This led to a catastrophic problem in our code. We had a decorator type similar to this:

public class Decorator : IService
{
    public Decorator(IService service) {...}
}

and the registration declaration was

container.RegisterType<IService, Decorator>(
    new InjectionFactory(c => new Decorator(c.Resolve<ConcreteService>())));

If the injection factory is ignored, this leads to infinite recursion when resolving IService: IService is resolved to Decorator, which accepts IService as a parameter, which is resolved to Decorator, which needs IService as a parameter, which is resolved to Decorator, and so on.

In Unity version 5.7.0 this was fixed and such registrations throw an InvalidOperationException.

To recap:

Unity version    Effect of the registration
<= 5.2.0         Injection factory used
5.2.1 to 5.6.1   Injection factory ignored; StackOverflowException for decorators
>= 5.7.0         InvalidOperationException

Lessons learnt:

  • If something is not allowed, it must throw an exception
  • Silently changing interpretation of existing code is evil.

PS. This is probably the first time stackoverflow.com actually lived up to its name: usually I search for other kinds of exceptions there. 🙂

Originally published at www.ikriv.com. Please leave any comments there.

This Friday I wrote a unit test and, to my astonishment, found that “a” < “A” < “ab” with the .NET InvariantCulture and InvariantCultureIgnoreCase string comparers.

That means that .NET string sorting is not lexicographical, which came as a shock to me. If it were lexicographical, the order would have been “a” < “ab” < “A”.

If the strings differ by more than just case, both case-sensitive and case-insensitive comparers (except Ordinal, see below) will return the same result. Any of the strings in [“ab”, “aB”, “Ab”, “AB”] will be less than any of [“ac”, “aC”, “Ac”, “AC”].

For strings that differ only by case, case-insensitive comparers return “equal”, and case-sensitive comparers order them lowercase first, so you get “ab” < “aB” < “Ab” < “AB”.

This produces a “natural” sorting where “google” and “Google” are close to each other, but it is not lexicographical. Consider unsorted  input “google, Google, human, zebra, Antwerp”. Lexicographically it would sort as “google, human, zebra, Antwerp, Google”, while most .NET comparers would sort it as “Antwerp, google, Google, human, zebra”.
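
To make this concrete, a minimal sketch along these lines reproduces that last ordering with the standard Array.Sort; the expected output is in the comment:

using System;

class SortDemo
{
    static void Main()
    {
        var words = new[] { "google", "Google", "human", "zebra", "Antwerp" };
        Array.Sort(words, StringComparer.InvariantCulture);
        Console.WriteLine(string.Join(", ", words)); // Antwerp, google, Google, human, zebra
    }
}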

StringComparer.Ordinal and StringComparer.OrdinalIgnoreCase stand out: these two are truly lexicographical. Also, they put capitals before small letters, because this is the order in which they appear in Unicode. I created a little application that sorts strings using various comparers:

https://github.com/ikriv/DotNetSortingTest

Input:
a, ab, aB, ac, A, AB, Ab

Sorted with StringComparer.InvariantCulture:
a, A, ab, aB, Ab, AB, ac

Sorted with StringComparer.InvariantCultureIgnoreCase:
a, A, ab, aB, AB, Ab, ac

Sorted with StringComparer.Ordinal:
A, AB, Ab, a, aB, ab, ac

Sorted with StringComparer.OrdinalIgnoreCase:
a, A, ab, aB, AB, Ab, ac

The lesson learnt: never assume anything unless verified. I’ve been working with .NET for over 10 years, and I never doubted that string comparison is lexicographical (what else could it possibly be?). I was in for a big surprise.

 


A quota for women

I read that California has passed a law requiring public companies to include the correct percentage of women on their boards of directors. Not 50% yet, but the quota system is already in place. And why, exactly, only women? So that nobody feels slighted, we should introduce a law requiring that for every 10 board members there be at least 3 women, 2 Mexicans, and one black person. And one Chukchi, just for the fun of it. Or, say, a Native American. The degree of Mexican-ness and blackness would be determined by a special anthropological commission using calipers and a colorimeter, measuring skull parameters and skin color. If a board member is not dark enough, or his nose is not wide enough, he would have to be fired and a racially correct candidate hired instead. Otherwise, just look at them, they have gotten completely out of hand, no respect for minorities at all.
