Interview with Marc Lehmann (November 2013)

How and when did you learn to program?

I started programming when I was around nine years old or so, with some 8-bit board, the type where you enter instructions in hex. My parents "heard" that computers are "bad for your development" and were reluctant to buy a "real" computer, but that board passed their test. Luckily, a few years later they bought me a C-128, and shortly thereafter, an Amiga 500, which was the ideal computer for learning OS design, low-level coding and a lot more: The earlier computers mostly allowed me to understand hardware and very low-level programming, but the Amiga OS was as advanced as a modern OS kernel. When the Amiga became too old and too slow (despite the 68030 upgrade board), I switched to the much inferior PC+DOS platform, and in 1993, to HP/UX and shortly after that to GNU/Linux.

As for languages, I started with machine language, switched to BASIC and 6502, Modula-2 and m68k, then to Turbo Pascal + x86, and finally to Perl + C on HP/UX and GNU/Linux.

I never stopped learning how to program, of course — just this year I finally sat down to properly learn Javascript (as opposed to fooling around with it), and I still learn new ways of programming in Perl or C every year, even though these two languages have been with me for a long time now. Perl is especially good at providing surprising new insights.

As to how — mostly I read the manuals that came with whatever language or computer I had, and worked from there. While studying informatics I learned a great deal of things, but I don't think I learned any programming, so I learned most things through trying and failing on my own.

The most important way I learned (and still learn) new things is by reading other people's code. I think I probably read 20 times more code than I write. Or maybe the ratio is even higher.

These days, I also often learn new languages by reading the relevant language specs to work around all the bad information you can find on the 'net — reading random blogs for basic info and then the spec to correct the misinformation works quite well for me.

What editor do you use?

VIM, exclusively so. I actually tried hard to learn Emacs, even getting the "Learning GNU Emacs" book — I normally don't try to learn from books. To this day, I can still tell at what page in that book my brain suffered a fatal reality exception and rebooted. Put differently, even though I might have preferred Emacs for being the cooler editor, it turned out that my brain is incompatible with it in a very fundamental way, and I am stuck with VI for probably the rest of my life.

This is also why I think any editor wars miss the point: It's your brain that is VI-like or Emacs-like, and no matter how good the other editor is, it might not work for you at all.

As a sidenote, before I used VIM I used joe for a while — coming from Turbo Pascal/Turbo C, joe was great because it used the same keybindings. The reason I mention joe, however, is that it still is occasionally useful because of a single killer feature: It can edit files that do not fit into memory. So when you need to interactively edit this 60GB text file — joe does it for you.

When and how have you been introduced to Perl?

That was in 1993 on HP/UX — I was looking for a nice language to start on Unix, and was overwhelmed by the Internet and the free availability of information, "standards" (RFCs), and the fact that almost everything (configuration files, protocols such as FTP) on Unix and the Internet was text-based.

Perl 4 came with good documentation (and for free, too :), and that's how I got accustomed to it. And when I switched to perl 5 in 1995, I simply read all the man pages that came with it in order, was amazed, and stayed with it ever after.

My first Perl program was a curses ftp client that could download files in the background — a very useful feature when the average download speed was 200 bytes/s. After it became popular it allowed me to do some security research (by gaining access to other people's accounts, strictly for educational purposes of course). When the university found out, they actually offered me a job, which, I guess, was good for my development after all.

What are other programming languages you enjoy working with?

He, "enjoy" is such a rubbery concept :)

I often pair Perl with C++ (if possible) or C (if not). I really love XS for its power and relative simplicity (in results, not in learning it), so I could describe my main language as "Perl+XS".

And while I dabble around in a lot of languages (for fun and profit), I couldn't say I really enjoy them. I regularly have to deal with posix shell, various assembler dialects and javascript, and avoid looking at php/python/ruby/... code at all costs, with mixed success.

As for truly enjoying a language, Perl is the only one, I am afraid.

What do you think is the strongest Perl advantage?

I am not sure Perl has clear advantages anymore, but for me, the thing that makes me prefer Perl over other languages in its category is the core language itself and the malleability of the interpreter.

One example is Coro — the fact that you can add such a thing as Coro as a mere extension to an unpatched perl binary is outright astonishing to me. Attempts to do this kind of thing to other languages such as Python usually end up with a rewrite or fork of the interpreter.

There are also many "micro-features" that don't look big, but really are. An example here would be __END__ — it doesn't look like a big feature, but it can be used to elegantly bootstrap a perl process over, say, an ssh connection. AnyEvent::Fork::Remote does this for example, but I used this many times in the past in commercial projects. Other languages such as python have no equivalent to __END__, so implementing AnyEvent::Fork::Remote there would require ugly workarounds, probably involving temporary files and other hacks. In Perl the solution is simple and elegant, and Perl is full of these small but occasionally super useful features.
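
For illustration, here is a minimal sketch of that idea (not the actual AnyEvent::Fork::Remote code; the host name and payload are made up): everything after __END__ is available to the running program through the DATA filehandle, so it can be shipped to a remote perl started over ssh without copying any files around first.

    use strict;
    use warnings;

    my $host = shift || "localhost";   # hypothetical target host

    # perl opens the DATA filehandle positioned right after __END__,
    # so the payload below is plain data as far as this program is concerned.
    my $code = do { local $/; <DATA> };

    # Start a remote perl that reads its program from stdin and feed it
    # the payload (list form of pipe open, so no local shell quoting).
    open my $ssh, "|-", "ssh", $host, "perl", "-"
        or die "cannot start ssh: $!";
    print $ssh $code;
    close $ssh or die "remote perl failed: $?";

    __END__
    # this part is never compiled locally - it runs on the remote host
    print "hello from $^O on ", `hostname`;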

CPAN for example is great, and for many years definitely was a major selling point, but it's not something that other languages don't have these days, and the code quality of many modules is severely lacking.

Still, CPAN, the language, the malleability all come together in the one language that is Perl, and that is a pretty unique combination.

What do you think is the most important feature of the languages of the future?

I don't think I have a good answer to that. I hope languages of the future let me do what I want to do and don't get in my way.

As for the near future, I would hope that languages would allow easier data sharing between processes on the same and different nodes. Of course, I am working on that in Perl as well (my todo list is very long — libev/EV was on my todo list for more than a decade before it acquired the required priority), but I am not there yet.

As for the far future, I suspect future languages will be more graphical, mind-controlled (and possibly mind-controlling), and so annoying that I will probably still code in this antique Perl language while everybody else has nice and shiny quantum computers built into their head that listen to their subconscious thoughts and write software on their own. If the economy permits it, that is.

But, seriously, it definitely would be nice if I would just have to explain my problem to the computer (by having a silent conversation using abstract thoughts), and the machine would do the work of implementing and debugging it...

You have written several event loop related modules that became very popular. Why did you decide to make them open and why do you think they became so widespread?

That is a difficult question — it's rather easier to say what the reasons aren't: I don't go for publicity and I don't go for fame.

I think publishing what you write is the natural and default state of affairs. Altruism plays a major role — when I genuinely think something might be useful (as in, "I wish somebody else had done it already"), and I am allowed to, I feel I just NEED to publish it for the common good. After all, that's what I have seen other people do as well, and it works — if people do publish, other people will be encouraged to do so as well, which is good for everybody.

I try to document my modules mostly because otherwise they wouldn't be that useful to anybody else. It's the extra work that is needed to make something actually useful in practice, as opposed to just being theoretically available. Documentation is the only part of my software that I write for others — the code I write because I have use for it myself.

As for why some of my modules became so widespread — I can only guess, as I usually don't care, to the point of not really knowing which of my modules are popular or not.

With AnyEvent, I had a collaborator who thought AnyEvent is so useful, it should be better known, and this is why we tried to provide some basic "must have" modules, such as AnyEvent::HTTP. Whether that helped is hard to tell for me, because I didn't try very hard, even the extra modules I wrote were chosen because they were useful to me.

Maybe they became popular because I studied event loops for almost two decades, silently complaining about various problems and lack of certain features, and as a result, tried to provide everything that I missed and considered essential, while not making the interface too complicated.

In fact, you can see this evolution in AnyEvent and EV: AnyEvent has an interface more like the Event module (using methods). Later, EV taught me that a much more minimal interface suffices (function calls with only a few fixed parameters), and thus the AE interface was retrofitted into AnyEvent.
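
For illustration, the two interface styles look roughly like this (both come with AnyEvent itself; the delays and messages are arbitrary):

    use AnyEvent;   # also provides the AE:: functions

    my $cv = AnyEvent->condvar;

    # Event-style method interface: named parameters.
    my $w1 = AnyEvent->timer (after => 0.5, cb => sub {
        print "method-style timer fired\n";
    });

    # Minimal AE interface, retrofitted later: a few fixed positional parameters.
    my $w2 = AE::timer 1, 0, sub {
        print "function-style timer fired\n";
        $cv->send;
    };

    $cv->recv;   # run the event loop until the second timer fires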

In the end, I hope my popular modules became so because they helped somebody solve a problem, just as they generally help me solve a problem (or two, or three). I fancy that it is me striving for quality that makes them useful, but I only ever think that in secret :)

Is it possible to briefly describe why Coro provides the only real threads in Perl?

Apparently not briefly, no.

The defining feature of threads in other languages (such as Python or C) is that address space (i.e. addressable code and data) is shared between threads. When you don't share this, you end up with something called "processes".

The "competing thread model" in Perl, ithreads, uses such threads in C to emulate Unix processes in Perl. On the Perl level, you end up with a (very buggy) process model, and sharing variables or code between ithreads is about as (in-)efficient and unnatural as sharing them between real processes, except real processes don't have to emulate the MMU in software. Sharing of objects and code isn't even implemented (array or hash-based objects come out empty after passing them to other threads). Sharing existing data structures isn't possible at all, and so on.

Thus, the "only real threads in Perl" statement wasn't meant to be controversial, but a rather obvious tag line, but in past years, there have been recurrent events where people define threads as something else and then contest the validity of the statement.

Basically all such cases are variations of "a thread is something that runs in parallel with other threads, so Coro aren't real threads, ithreads are", without realising that this would instantly a) rule out pthreads, python threads, ruby threads and most other thread systems, as they often cannot run in parallel either and b) mean that processes are threads as well, which makes this definition not very useful (we already have a word for process — "process").

While processes are certainly threads of execution, the common meaning when people refer to threads in imperative programming languages is to mean "multiple threads of execution share the same address space", and thus, Coro are the only real threads in Perl, sharing both code and data naturally.
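
A minimal sketch of what "sharing both code and data naturally" means in practice (the exact output order depends on scheduling):

    use Coro;

    # An ordinary lexical is simply shared by all Coro threads in the same
    # interpreter - no cloning, marking or serialisation as with ithreads.
    my @log;

    my $t1 = async {
        push @log, "first thread ran";
        cede;   # let other ready threads run
        push @log, "first thread resumed";
    };

    my $t2 = async {
        push @log, "second thread sees " . scalar @log . " shared entries so far";
    };

    $_->join for $t1, $t2;
    print "$_\n" for @log;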

What was the reason behind writing common::sense?

I always wanted to have some useful warnings, but a lot of warnings in Perl are just getting in your (my?) way, or are fatally flawed in some minor detail, making the whole warning useless.

Not having common::sense either meant duplicating its effects in basically all of my modules and programs, or not having any warnings at all.

So the reason behind writing common::sense is maintainability: unbundle code from various modules that need it and put it into a common library so it can be shared. If you look at the documentation for common::sense, you can see that the code isn't trivial and has changed multiple times. Without common::sense, I would have had to make new releases of most of my modules each time something changes, with no benefit at all to the users.
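
Roughly, the idea looks like this (the exact set of strictures, warnings and feature switches that common::sense enables has changed between releases, so its documentation is authoritative):

    # Without a shared policy module, every module repeats a block like
    # this, and every policy change means re-releasing every module:
    use strict;
    use warnings;   # usually with some categories disabled again by hand

    # With the policy factored out, one line per module suffices, and the
    # policy can evolve in a single place:
    use common::sense;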

What was the story with JSON::XS and Perl hash key ordering?

There are certainly multiple valid viewpoints on this topic, but here is mine:

Many years ago, there was a bug in CGI.pm that could trigger resource starvation, and for some reason, the perl 5 porters thought that making core perl a bit less useful was better than actually fixing that bug, or having resource limits in place to catch such exploits.

I thought that a bit strange at the time, after all, other languages with dictionary data types do not generally randomise them for security reasons, and thus C++ and other languages suffer from the same "security issue" (or at least fix the actual bug, namely resource starvation, not the symptoms).

Nevertheless, the documentation said that the ordering would be the same within one program run, and that was certainly true enough for more than a hundred modules to rely on it (there wasn't much breakage on CPAN at the time, as very few programs relied on hash ordering between multiple runs, probably because that never was a given anyways).

More recently, this randomisation apparently was found lacking (or broken), and a different system was implemented, where even the same hash would end up with a different order, within a single program run, something not actually needed to work around the original security issue.
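
A hedged illustration of the difference (the actual behaviour depends on the perl version and hash seed settings):

    # With the original randomisation, key order was unpredictable between
    # runs but stable within one run; with per-hash order perturbation, even
    # two hashes built from the same keys in the same run may disagree.
    my %a = map { $_ => 1 } "a" .. "j";
    my %b = map { $_ => 1 } "a" .. "j";

    print join (" ", keys %a), "\n";
    print join (" ", keys %b), "\n";   # not guaranteed to match the line above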

There was no deprecation cycle — documentation and code was changed, and (fortunately) a lot of patches for modules were created, as otherwise, a lot of vitally important modules (i.e. not JSON::XS, but e.g. LWP) would no longer run with the next perl release without patches.

My involvement was in questioning the need for this drastic step — while a real security issue would probably justify that, I found the reasons were somewhat lacking (the few other languages that implement randomisation do it the "old" and dependable way, and there was no actual security exploit known), and to date, nobody I asked had an answer as to why the actual bug isn't being fixed (or checked for).

The only answer I got (from the perl pumpking) amounted to: "That change fixes the issue, and we have no better patch at this time." This is reasonable, but is merely after-the-fact.

It's not the first "feature" that breaks existing code without a good reason in recent perl releases, and maybe it's just me, but I value bugfixes and stability much higher than cool new features.

In fact, Perl has "recently" acquired a much faster release cycle — roughly one major release per year. Unfortunately, backwards compatibility has effectively become a non-priority, so nowadays I have to deal with breakage regularly, which means I have to invest a lot more time in my modules, just to work around the latest incompatible change. It doesn't help that bug reports about incompatible changes are often ridiculed or downplayed, and all that caused me to post some harsh criticism of the process.

When you have followed perl since the 5.000 days, the difference between the stability/backwards compatibility of Perl in the past and the recurrent breakage of Perl nowadays is pretty striking.

Can you give any examples when App::Staticperl might be useful?

I often send customers a staticperl-compiled executable in addition to the perl sources — often they don't know Perl, and requiring them to install a lot of modules would simply not compute. With a working staticperl setup, creating the binary is hardly more work than running the program in the first place, and even if the person who receives the program is good at installing dependencies, being able to instantly run the program without any extra work is just nice.

On systems that allow actual static linking (e.g. not glibc and not anything windows), you can even create binaries that "run everywhere without dependencies" — for example, I sometimes distribute GNU/Linux binaries that run on any x86 or amd64 system with a reasonably new linux kernel, regardless of libc or other details (alpine linux is a good starting point for building such binaries).

Sometimes customers don't want to know about Perl, and staticperl enables me to sneak perl into a binary or shared library without the customer freaking out. It's amazing how fast and capable Perl is if it doesn't have to suffer from any stigmas because users think the program is written in C :)

It's also quite nice when you want to embed perl in an executable without having to have a lot of extra files. staticperl creates a .h and a .c file for me, and when I compile and link these and libperl into my program, I have a full-featured Perl interpreter plus Perl library and custom modules, without any external files and without any dependency on a writable filesystem. All in your own C program.

The practical relevance of this is that I can often embed perl in cases where I could normally only embed lua.

I also happen to sometimes test out new weird Perl build options or versions quickly using staticperl. That's just me, though: staticperl wasn't meant for that. I hear a lot of good things about perlbrew, so you should try that first if your goal is to have many perl binaries.

Now, this isn't advertising — all these features come with a hefty price tag. A few very broken modules (mainly Module::Build) make staticperl less than straightforward and only an option for experts, and having to understand how Perl builds and is configured, and what the difference between static linking and a static binary is can be more than most people want or need to know. And often I feel I am the only person left on this world who is embedding Perl on a regular basis (it feels natural to me, but apparently not so to others).

Once you are past that point, though, building standalone binaries with staticperl is serious fun, and quite a bit faster and easier (for me) than e.g. with PAR::Packer. And they have a much higher chance of actually running, in my (biased) experience — if PAR::Packer had worked reliably for me in the past, there wouldn't be any App::Staticperl or Urlader nowadays.

(As for advertising, Urlader + Perl::LibExtractor are yet another option to bundle perl, which is what I actually use on Windows. It is still faster and more reliable than PAR::Packer, but works with any program, not just Perl, and — to me — is conceptually simpler).

You seem to heavily optimize your software. Is this usually a requirement or just a habit?

Actually I thought most of my software isn't very optimised, but I understand why it looks that way.

First, I do have considerable experience not just with programming, but also with what is fast and what isn't, and I indeed have trouble writing things inefficiently if, with little or no extra effort, I can do it better. Therefore, many of the optimisations you see are due to coding habit or because I wanted to avoid the pain of not writing good — according to my standards — code, when it doesn't result in some obvious gain.

Secondly, I heavily optimize basic and often-used functions (libraries, modules and so on), so when I write the actual program that uses these libraries later, I can choose to write clear, small, and possibly not so efficient code.

Look at it that way: You can write your app in C, or you can write it in Perl. All else equal, in C it will be faster, but it's just such a pain to do it.

That's why you might want to write it in Perl — it's convenient, and since somebody else has heavily optimised Perl in C, it's probably fast enough.

The maxim at work here is "use the most convenient language that is still fast enough", and having heavily optimised C libs for Perl makes it possible to choose Perl more often for the convenient parts.

If you combine "use the right language for the job" with Perl+C, then you usually have everything covered, because Perl is good at things that C isn't, and vice versa, resulting in the best of both worlds.

So in reality I am actually too lazy to optimise my software, and this is why I apply optimisations where they matter most. This allows me to slack off and write less efficient but simple and straightforward high-level code.

However, optimisation always has lower priority than correctness.

Where do you work right now? Is Perl your primary language?

I work at nethype GmbH in Germany, a company I founded together with a friend while we were both still studying.

Luckily, Perl is our main language, which we use for anything ranging from millisecond, distributed domain trading or implementing high-performance DHCP or radius servers, to mundane things such as SMTP crawlers, web applications or even online games such as Deliantra — the full works, basically. We are also in the lucky position to often be able to publish modules that we developed for work as free software.

While C or C++ often aren't far away, Perl is the perfect language to write high-level logic in, not least because with XS you always have easy access to C to implement the few time-critical parts.

Do you enjoy visiting Perl conferences?

Absolutely! Mostly for the ability to talk to other Perl freaks (and also normal attendees :).

However, making my actual body move to actually attend one is another story — I am really just lazy, which is why I attend even fewer conferences nowadays than in the past. All the planning effort, and then you have to fly or drive (and prepare a talk)... too much effort (I also do avoid the US these days, for complicated political reasons).

I have never regretted it when I forced myself, but it might require some coaxing.

What are your thoughts on Perl future?

By 2050 me and the other two people still using Perl make a lot of money maintaining the existing cobol^Wperl programs that the rest of humanity depends on to fight their (vain) fight against global warming, with me ending up rich and the rest of the world ending up dead and starving, until electricity runs out and my money becomes useless and we are all fucked.

Ok, sorry, I mixed my future in there.

Realistically, I think Perl 6 will continue its slow and necessary death, giving Perl even more breathing space to survive and probably even grow.

I don't think any improvements (if they are improvements) to the Perl core will have much effect on the future of Perl — Perl is decidedly good enough, and CPAN is quite good as well.

I do think Perl isn't "fashionable", and I think that this and the fact that Perl is no longer alone in its category will make its importance shrink, because fewer people will learn Perl, or see it as a "main language".

For me, that's not bad, and quite natural. I think the strength of Perl is not in it being innovative, but being dependable and stable. That isn't good for growth, but Perl might still be around and useful when other, more fashionable, languages are long forgotten.

However, I am not driving the Perl development process, so my predictions might be totally off and my wishes might not be shared by the people who shape the actual future of Perl.

Specifically, I would personally wish that backwards compatibility, and keeping CPAN in a working state by not continuously breaking modules would gain more priority.

This is far more important to me than implementing new features in the perl interpreter — perl is so flexible that new features can often be implemented in a module without breaking stuff, and that's a better way to drive innovation.

So, I believe Perl is safe from extinction, will keep its place among other popular languages, and can, in theory, even grow while keeping the enormous body of existing code working.

Should we encourage young people to learn Perl right now?

Absolutely — it is a great and useful language. One might argue that Java is better for Money-Oriented-Programming (MOP), and C++ programmers might be more commonly sought after on the job market, but apart from a slightly harder job search due to it being more exotic, there is nothing wrong with Perl as a first or second programming language, and it's just so much more fun to code Perl than Java, which is good for your health.

So, when somebody would like to do something with ruby, python or a similar "scripting" language, suggesting Perl as another option will not do them any disservice: Giving "young" people the option of choosing between multiple languages enables them to choose the one they feel most comfortable with, and that is what counts.

Imagine I hadn't tried VI because I was told it was old-fashioned or so — I'd be eternally unhappy with GNU Emacs by now. Shudder.

There is a minor problem with that though: Sometimes I get asked how to learn Perl, and I usually am at a loss there, having learnt Perl from reading its manual and source code, which probably isn't the most painless way to approach it. Looking at actual books, I find that many books are not written for beginners — "Learning Perl" from O'Reilly for example explicitly says it doesn't teach you programming, and most other books I have looked at kind of only teach Perl as a second language (or are simply crap). The one exception I found is "Beginning Perl" by Simon Cozens (and it's free to use, too!).

I think it's difficult to recommend Perl as a first language when most books treat it only as a second language, so make sure you have looked at a few books before recommending Perl, so you can recommend a suitable one.

But there is nothing wrong with the Perl language itself.

Questions from our readers

Why do you still use CVS?

Short answer: it works.

Long answer:

CVS still does everything I need quite well. Git for example requires more commands to do the same things that CVS does with checkout/update/commit and is a lot of trouble when working in a tight-knit group (while being far preferable to CVS when working in a more distributed fashion!). SVN really is just worse than CVS (same command structure, similar limitations, but longer URLs and this weird "copying" model that decidedly isn't how my brain works). If I had to, I would probably look into Mercurial, which looks like a saner alternative to git, should CVS not serve my needs anymore.

I am not religious about CVS (but it's astonishing how many people are religious about git and harass me to switch to it), it just happens to serve my needs. Why change something that works well for you: The time I can save by using CVS can be used more productively elsewhere.

Also, commit ids and tools such as cvsps bring CVS to about the same level as other systems these days, fixing most limitations, so the difference isn't as big as it once was.

You are famous for expressing your opinion in a harsh way. Is this intentional?

Yes and no — I usually have opinions about things or concepts, not about people, so I don't quite see why I should hold back — it's not as if I can hurt the feelings of a thing or concept. I can also always substantiate my opinions, so I don't feel that I need to hold back (even when wrong, I made a reasonable attempt at being right, as opposed to lightly making things up and having unsubstantiated opinions on everything). Both of these help me to have, and state, strong opinions, and to rarely be wrong. It also helps to be able to shut your mouth on topics you don't know much about.

I also have the impression that I am not actually that famous for this — the few people who bring this topic up usually were told about it by (some very few) other people without having seen any evidence themselves. In fact, the only people I see who publicly make this claim are disgruntled fans whose patches I didn't apply...

This impression is supported by my experiences with "non-Perl-communities", where I am not famous at all for this behaviour (I have a few popular non-Perl libraries and programs...).

Do you consider yourself a part of Perl community?

That critically depends on how you define "Perl community".

At the first Perl Workshop in Germany, I was astonished to see the enormous diversity of white-collar people, nerds, hobbyists and so on, who had little to nothing in common — except Perl. Did they form a community? I don't know, but I doubt it.

I contribute to CPAN, and consider myself part of that "community". I don't consider myself part of the "modern perl movement" community (if such a thing even exists).

So I really don't know what the "Perl community" is supposed to be, and I suspect there isn't a single one, but many different ones.

One thing I can say for certain though: most of my projects are not "community projects", or "community driven projects". I do develop software for my own needs, and that often means I will not provide a feature that I will not use myself (nor be able to maintain). This is the reason why I usually develop things to be extensible by other people — that allows me to decline feature requests.

Have things improved with rt.cpan.org?

Things have actually gotten worse. There are some cosmetic changes (such as displaying a notice on a subset of pages), but overall the effect is that rt.cpan.org has become even harder to avoid. The core problem is there just as much as in the beginning: It forces itself on other people's code and cannot be disabled (last checked: beginning of 2013).

So, there has been an effort on the side of rt.cpan.org to hide the problem, but no effort to actually improve the situation. It feels like a game that I don't want to play. It's pretty annoying, really, just as being force-subscribed to some microsoft list or so, except you can't ignore it as spam because there are other people being victimised as well.

I just wish there was a simple setting to disable rt.cpan.org for my modules or change it into a mail forwarder, but the powers that be are clearly not interested in this, making it hard to configure and impossible to disable.

Are the OpenCL Perl bindings stable enough? What are the advantages of using this library from Perl, rather than C/C++?

I haven't seen a need for a new release in the last 1.5 years, and it still works reliably enough for my usage. I suspect more work will come when NVidia finally supports OpenCL 1.2 (or, wow, maybe even 2.0), so I can improve support for that (I still have only a 1.1 driver to test against).

And the advantage of using OpenCL in Perl rather than C/C++ is that you can use it from Perl, silly :->

Why does AnyEvent treat IO::Async in a special way?

Short answer: Because the other backends don't work with IO::Async.

Long answer: IO::Async implements its own event loop (rather than being a framework that sits on top of one — it is a framework, but also comes with its own event loop). For every incompatible event loop there needs to be a custom backend — just as there is one for EV or Event, there needs to be one for IO::Async.

If IO::Async only used other event loops (e.g. via AnyEvent, or directly using EV etc.), then no such backend would be needed. AnyEvent would then use the same backend as IO::Async and the two would mostly coexist peacefully.

(To the very technically inclined: normally, there would only need to be an interface to IO::Async::Loop, which implements the IO::Async event loop, but that module is the rare case of an event lib that doesn't support sharing between multiple independent users, so to be able to work with IO::Async, AnyEvent unfortunately needs to go via IO::Async (and still needs some extra manual setup, as IO::Async has the same design issue — multiple independent IO::Async users cannot share it, so you need to tell AnyEvent about the instance of IO::Async you wish it to use)).

Apart from that, IO::Async isn't treated specially, i.e. it's treated specially, but in the normal way for AnyEvent.

There is some controversial code that explicitly disables itself when used with another module though, named IO::Async::Loop::AnyEvent. That module is not needed to use IO::Async, nor is it part of IO::Async, but maybe this is what you wanted to refer to in your question.

The history of that module is that AnyEvent sits above IO::Async as an event loop, but this module tries to turn the whole thing on its head by putting IO::Async on top of AnyEvent, which then sits on top of IO::Async. This leads to endless recursion or other obvious or subtle problems, and did eventually lead to bug reports ending up in my inbox.

I mailed the IO::Async author about this problem and explained how IO::Async can properly use AnyEvent, but never received an answer (the author later publicly admitted that he ignored my mail because he didn't care).

This is why AnyEvent fails when that module is loaded, rather than causing subtle and hard to debug errors later — first, I don't want to deal with the bogus error reports, and second, I want to protect my users from silent data corruption or endless debugging sessions.

The really ugly and most painful detail of this story, however, is the fact that the author of POE used this to start a public smear campaign against me, fooling people who can't be bothered to look at the evidence — a perl 5 maintainer even asked for CPAN to remove my modules (and later apologised for saying that without actually looking at the code themselves), and this is the only way I can conceive of how this question came to be: To say it clearly, AnyEvent does not and did never treat IO::Async differently than other backends, except where necessary due to differences in the API.

Summary: despite some ugliness, AnyEvent continues to support IO::Async as best as it can: two modules that use IO::Async cannot transparently coexist with each other without special care, which is up to the unfortunate module user to implement.

As long as you work around this design issue with AnyEvent::Impl::IOAsync::set_loop, AnyEvent users will work just as magically as with other modules, and IO::Async is not treated differently than other backends in any way.
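
For reference, a minimal sketch of that setup (backend selection details are per the AnyEvent::Impl::IOAsync documentation; the timer is just a stand-in for real work):

    use IO::Async::Loop;
    use AnyEvent;
    use AnyEvent::Impl::IOAsync;   # one way to select the IO::Async backend

    # IO::Async loops cannot be shared transparently between independent
    # users, so AnyEvent has to be told which loop instance to piggy-back on.
    my $loop = IO::Async::Loop->new;
    AnyEvent::Impl::IOAsync::set_loop ($loop);

    # From here on, plain AnyEvent code runs on top of IO::Async as usual.
    my $cv = AnyEvent->condvar;
    my $w  = AnyEvent->timer (after => 0.5, cb => sub { $cv->send });
    $cv->recv;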

What kind of a "Thank you" do you prefer? Litres or bottles?

I have trouble with any kind of "Thank you", because most likely, whatever it is, I didn't do it for you, so I have issues accepting "thank yous" (mild Asperger syndrome probably).

If you ever are in a position where you want to thank me for a module (or anything, really), don't tell me, tell others about it — if a module was genuinely helpful in some way, help others by teaching them about it. Conversely, if you don't like my code (as opposed to my person), feel free to criticise and "unrecommend" it.

Of course I do like to hear when people use my stuff (when I published JSON::XS 2.0, I got a storm of "why did you change the API of this widely used module" mails, when in fact I was under the impression that nobody used it because nobody bothered to tell me about it — maybe I should have added more bugs to receive more feedback?), so feel free to drop me a note on how it was useful (and thank me if you must, but keep in mind that it is easier for me to deal in objective things than in feelings, and don't be angry when I simply ignore your thank you part — I just don't know how to deal with it).

Rest assured that I will continue to contribute whether or not people thank me for it, but I might be more inclined to publish something when it is useful to more people.

Interviewed by Viacheslav Tykhanovskyi (vti)