Well, I’ll give it to Microsoft: maybe they have an angle with IoT and enterprise. (Not to mention a rather creepy one.) In the end, though, only time will tell.
WARNING: This post is a commentary. If you’re looking for anything technically insightful, you might be disappointed.
Out of curiosity, I attended an Azure training event a few days ago with a group of colleagues at Microsoft’s Reactor in NYC. Though I’ve played with Azure a few times, I could definitely use more insight. (Since it’s been ages since I last attended a training event, I also wondered if I was missing out on anything. I’ll admit that the food has gotten better.) Before I start, I commend Microsoft for having made leaps and strides in various respects, particularly with Linux and open source. I should also commend the staff who conducted the event. They were very helpful and considerate, and I can only imagine the difficulty of dealing with a room full of disgruntled developers. So, they performed well despite the given constraints. Now, the actual event, on the other hand…that’s a different story. In many ways, it reminded me of my general impression of Microsoft these days: sometimes getting a little too far ahead of themselves.
So, the general idea was a good one: introducing people to Azure using a hypothetically fun scenario. In this case, you help save astronauts stranded on Mars, all through a series of printed-out “classified” tutorials on contemporary topics (IoT, serverless computing, etc.). Okay, decent concept. But the devil is in the details… First, the event space didn’t have enough bandwidth to accommodate the laptops of several dozen people. (As soon as I saw that there were no LAN jacks and that it was WiFi-only, I knew that we were in trouble. It’s common knowledge that between Windows updates and NuGet packages for project builds, you’re gonna need plenty of bandwidth.) Second, the presented materials were sometimes confusing (hyperlinks on printed paper, etc.) and only loosely tied to the general theme. For example, one exercise had us using their face-recognition API, which for some reason is separate from Azure. Exactly how this helps stranded astronauts, and why we were using pictures of college students, I couldn’t tell you. Finally, the tutorials themselves were simple exercises in copying code and clicking menu options in order to showcase certain technological features.
However, no tutorial gave a general idea of what Azure is really about: being a cloud platform. The basic framework was ignored (spinning up a VM instance, loading a new database, etc.) in order to tout its more niche features. Personally, though, I think that the marketing team should have been more practical and focused when planning this event. For example, use the theme of a skunkworks team within a larger organization. How could such a team leverage Azure to become innovative? In other words, provide inspiration and ammunition to your base of enterprise users that doesn’t yet use the cloud. That said, I understand the marketing team’s goal: to appeal to both enterprise AND startup customers for Azure. Obviously, management has envisioned that as the strategy to overtake AWS.
Which, frankly, I think might be a mistake. For example, how many people with Arduinos and Raspberry Pis are looking to create IoT products with .NET Core? True, I’m not an expert, but after spending a few minutes with it, I still can’t see it winning too many hearts and minds. Instead of spreading their resources too thin, it might be beneficial to double down on their bread and butter, especially since there’s already so much competition in other verticals. After all, there are still plenty of enterprise customers to win over. In fact, I can think of one or two opportunities in niche spaces hidden among enterprise users. I know that I don’t have the mile-high vantage point, so I’m not privy to certain details…but since I’m leery of wobbly ladders, I tend to prefer low-hanging fruit. 🙂
Well, I was patient, and I waited a year to see how Xamarin would integrate with Office 365. I was hoping for some new libraries and some new tutorials, so that I could eventually build that killer enterprise Xamarin app. Honestly, it would be nice to have a METAmessage for Android, which could offer the ability to customize the alerting functionality on your phone…but, alas, it seems that I’ll asphyxiate myself if I keep holding my breath.
So, I capitulated and just reverted to using IFTTT, so that it’ll just call me in specific cases. It’s not the ideal alert system, but it’s better than nothing. (Though I will admit that it’s fun to hear the automated voice of IFTTT as it reads my ridiculous excuse of an alert.)
So, there’s been so much talk about AI lately, and in particular, there’s a great deal of interest in bots. No, not the Mirai kind (which hopefully isn’t plentiful in the future, despite its Japanese translation). No, I’m talking about the friendly, enterprise kind. You know, the chatbots on Facebook that are supposed to be helpful snippets of AI, capable of booking hotel rooms for you. Of course, I don’t really understand the usefulness of these bots, since there’s no way that a bot could help me find the ideal room faster than my own investigation. In fact, they seem kinda…well…dumb. But these bots are probably not aimed at a self-appointed pariah of social media like myself. Instead, they’re probably meant for people who are younger (i.e., millennials) and more predictable (troves of available marketing data via Facebook, less variety of purchases, etc.). In that case, I suppose that it’s useful for some but not for myself…or is it?
Similar to my reaction to chatbots, I never quite understood the newfound love for Slack. It’s a messenger app…so what? However, as I started to delve more into it, I began to understand its appeal through its extensible functionality, especially for developers. I can create a simple bot (or a basic web service) on my public-facing servers, then use Slack on my phone to talk with it and get the status of machines and processes? Okay…that’s kinda cool. (Assuming that your company and networking department embrace the idea of allocating machines just for this purpose. Trust me, I know…that can be a hard sell.) So, maybe, just maybe, I could be down with these chatbots. That way I could use Slack (or Skype) and be hip like the cool kids!
Hmmm…so how could I actually pitch this one to the brass? Curious, I looked to see if there was already an enterprise version of such a solution, and though I did find one or two, they seemed costly and less flexible than desired. So why not just build one cheaply on my own? Since I had recently read something about Microsoft’s nascent bot framework and its integration with Skype, I figured that I could start there as a quick way to prototype. After proceeding through a few quick tutorials, it became obvious that a chatbot is nothing more than a tailored RESTful web service, and with that realization, I quickly assembled a working version of the prototype that I had in mind.
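That realization — a chatbot is just a tailored RESTful web service — fits in a few lines of code. Here’s a minimal sketch in Python’s standard library; the machine names, the `{"text": ...}` message shape, and the `status <machine>` command are all invented for illustration (a real bot would sit behind Slack’s or the Bot Framework’s webhook plumbing and query actual monitoring endpoints):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical status source; a real bot would query your monitoring system.
SERVER_STATUS = {"web-01": "healthy", "db-01": "degraded"}

def reply_to(message, status=SERVER_STATUS):
    """The entire 'bot brain': parse the chat text, build a reply."""
    if message.startswith("status"):
        _, _, name = message.partition(" ")
        return status.get(name.strip(), "unknown machine")
    return "try: status <machine>"

class BotHandler(BaseHTTPRequestHandler):
    """Accepts a JSON payload like {"text": "status web-01"} via POST
    and answers with {"text": "<reply>"} — i.e., a plain REST endpoint."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        message = json.loads(self.rfile.read(length)).get("text", "")
        body = json.dumps({"text": reply_to(message)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To run it: HTTPServer(("", 8080), BotHandler).serve_forever()
```

The chat platform is just a front end here; everything specific to your shop lives in `reply_to`, which is why swapping Skype for Slack (or abandoning a framework that keeps breaking) costs relatively little.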
However, over the next few weeks, I started to realize that it wasn’t viable. One, since this framework is too young to even stand on its own wobbly legs, Microsoft keeps updating it and breaking my stable prototypes. As with previous experiences dealing with a Microsoft gestation, I wondered again whether Redmond’s new projects (along with their frenetic and seemingly erratic updates) are victims of Conway’s Law…Two, I read that Skype does not and will not support third-party bots that are not publicly registered in its Bot Directory. I’m fairly sure that nearly all of the company brass would have a problem with a publicly available chatbot that reports the status of our internal servers. Just a hunch.
After taking a quick look at other platforms, I came away with similar impressions. In the end, I’d say that chatbots are like a lot of new tech these days: lots of potential but some distance away from ultimately being practical.
Over at The Outline, Adrianne Jeffries has written a piece about the all-too-common banality of the interview process for software developers and engineers. You know the drill: write bubble sort from memory, balance this binary tree, etc. You know, all of that stuff from college that you did once and never repeated in your career.
Well, as it turns out, there appears to be a growing movement of developers on Twitter addressing and ridiculing that very subject (including notables like David Heinemeier Hansson). I’ve heard a few humorous stories from friends who have undergone that very process with Google and Amazon (though with no serious intentions of working for them); the most amusing anecdotes come from those who have challenged the interview process, getting only confusion and irritation from the interviewer in response.
In any case, after holding the same viewpoint for years, it’s good to know about the multitudes who are on the same page. Hopefully, all this press will create a different mindset, ushering in a new era for the interview process…
But I wouldn’t bet on it.
So, after my last post, I got curious: is there any software out there that performs XML schema evolution, even if it’s proprietary? Oddly, after searching for a few minutes, the answer coming back from the web seemed to be “no”. Now, Oracle and IBM do offer functionality to update your current XML documents according to a new schema…but only if the new schema doesn’t invalidate documents that conform to the old one. Basically, their “evolution” functionality allows you to further refine your schema’s rules, like changing the maximum/minimum number of a tag’s occurrences or adding a new required tag. That’s hardly any sort of evolution; it doesn’t even provide the ability to automatically rename tags/properties like Avro does! So, the claims of Oracle and IBM might be more marketing than engineering.
But I guess that marketing and buzzwords are all too normal in software…After all, whoever coined the term string interpolation definitely took some severe liberties, since it’s surely a long way off from real interpolation. In any case, there seems to be an opening for a niche market here, one which could be somewhat lucrative. However, these days, all the big bets of towering chips are on the table of machine learning, big data, and AI. In the eyes of the major league, anything that deals with XML (i.e., old-school data processing) should go play the slot machines.
Good for me…I don’t mind being stuck alone in a dark corner! Reminds me of playing Street Fighter 2 by myself in the back of a pizza parlor and having a blast…In any case, I was looking for tools that could help build an engine for XML schema evolution. Interestingly, I found an open source project by Dmitry Pekar that can convert both ways between XML and Avro. That could help by extending the functionality already in Avro…but besides the simple renaming of tags/properties, it doesn’t satisfy my proposed requirements. (Plus, your distributed architecture would have to ultimately use Avro, which would be a refactoring headache in some instances.) I haven’t found anything else yet, which makes me suspect that my handcrafted MDD approach might be the only viable option.
Well, as I said before, I’d get back to metadata-driven design…and here we are!
So, as I was perusing InfoQ one day a few weeks ago, I stumbled upon an interesting video by Vinicius Carvalho of Pivotal. Basically, within the video, Vinicius (okay, I’ll admit it – it’s a cool name that I wish I had) addresses an issue familiar to anyone who creates web services: how does one evolve a payload’s schema without breaking the clients of users who referenced the old schema? For example, if we were returning a payload with a property/tag called ‘Price’ and we wanted a new version of the schema to replace that tag with ‘PubPrice’, how could we do that without requiring every user to change their client/consumer app? These kinds of presentations are my favorites, since they address real-world problems.
So, in his presentation, Vinicius goes about demonstrating how one can create a solution to such a dilemma. Since he works for Pivotal, he uses the Spring platform to present a scenario where a web service has an original schema that needs to be altered in its eventual evolution. (Granted, he’s probably a fan of the Spring framework, which means that he’s a fan of event-driven frameworks…but we won’t hold that against him. I’m kidding, I’m kidding…take it easy, Spring zealots.) For the first few minutes, Vinicius focuses on format, which is important when discussing web services that return payloads. (And, yes, I agree with him: JSON is bad, mmmkay.) Even though I’ve never used it, since my stakeholders require verbose, human-readable formats (i.e., XML/JSON), he does make a compelling case for the Avro format; it does appear to be impressive, and it can be very powerful in capable hands.
However, the most interesting part is when Vinicius begins to talk about the actual mechanism at the heart of the presentation: schema servers and their registries of schema versions. Basically, using features within Avro, a schema server allows the creator of a web service to register their original schema and any subsequent versions of it. When a new version of the payload’s schema is conceived, the new schema can be submitted to the schema server in order to test whether it breaks (i.e., is not backward-compatible with) any older versions of the schema. Plus, accompanied by markup, the new schema can indicate any tags that will replace tags existing in the previous version. So, when an older client submits data to the updated web service, the web service can use the schema server as a translation device. Nifty!
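To make the registry idea concrete, here’s a rough sketch of what its compatibility gate might look like. To be clear, these are not Avro’s actual resolution rules (the real thing also handles aliases, unions, defaults-by-position, and type promotion); it’s a simplified Python model, with field specs as plain dicts, using the Price/PubPrice example from above:

```python
# Simplified model of a schema registry's backward-compatibility check.
# A "schema" here is just {field_name: {"type": ..., optional "default": ...}}.

def is_backward_compatible(old, new):
    """A new schema is accepted if every field it introduces has a
    default (so old payloads still resolve), and no shared field
    changes type. Real Avro resolution is richer than this."""
    for name, spec in new.items():
        if name not in old and "default" not in spec:
            return False          # new required field: old data breaks
        if name in old and old[name]["type"] != spec["type"]:
            return False          # incompatible type change
    return True

class SchemaRegistry:
    """Keeps every registered version, rejecting breaking changes."""
    def __init__(self):
        self.versions = []

    def register(self, schema):
        if self.versions and not is_backward_compatible(self.versions[-1], schema):
            raise ValueError("schema breaks backward compatibility")
        self.versions.append(schema)
        return len(self.versions)  # the new version number
```

Note that under this naive check, simply swapping ‘Price’ for ‘PubPrice’ is rejected — which is exactly why the schema server needs that extra markup declaring which new tags replace which old ones, so it can translate instead of refuse.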
I love this solution, and I want to commend Vinicius and his colleagues for sharing a solution to a common problem. However, what if we wanted to evolve this solution for more complex scenarios, where the changes to the payload are more involved? For example, what if we wanted to split one tag into several others? Or what if we wanted to replace an entire composite with another one? In those cases, simple markup wouldn’t be sufficient to indicate how the schema server could transform the data from one form to another. You would need to create a tool that could help you define such transformations systematically, and you would need the right methodology in order to build it. You might know where I’m going with this one…Yes, I think that this is where the application of MDD could produce the guts of the schema server and make it even more powerful!
Granted, if we stayed with Avro, it would be difficult to create such a translation service; the functionality built into Avro is likely difficult to extend (if it’s even possible). However, if we use XML (which, in my line of work, we are most apt to use) as our API payload’s vehicle, we could use something that Vinicius mentions in his talk: XSLT. Even though I can’t say that I’ve never cursed while using it, it can be a helpful tool in certain cases…and in the case of creating an MDD server for schema translation, it fits perfectly! Using MDD, we could create a schema server that generates the appropriate XSLT and then performs complex conversions from one XML schema to another. I have a few ideas on how to make such a thing work…but that’s for another time.
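Just to gesture at the metadata-driven part of this idea: below is a toy Python sketch in which a rename mapping (the “metadata”) drives the generation of an XSLT stylesheet — an identity template plus one template per renamed tag. The mapping and document are invented for illustration, and since Python’s standard library has no XSLT engine, a second helper applies the same renames directly with `ElementTree` so the result can be demonstrated:

```python
import xml.etree.ElementTree as ET

# Metadata describing the evolution from schema v1 to v2.
# (Hypothetical mapping; a real MDD tool would derive this from a model.)
TAG_RENAMES = {"Price": "PubPrice"}

def generate_xslt(renames):
    """Generate a minimal XSLT stylesheet: an identity template copies
    everything verbatim, and one extra template per rename swaps the tag."""
    rules = "".join(
        f'<xsl:template match="{old}">'
        f"<{new}><xsl:apply-templates/></{new}>"
        f"</xsl:template>"
        for old, new in renames.items()
    )
    return (
        '<xsl:stylesheet version="1.0" '
        'xmlns:xsl="http://www.w3.org/1999/XSL/Transform">'
        '<xsl:template match="@*|node()">'
        '<xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>'
        "</xsl:template>"
        f"{rules}</xsl:stylesheet>"
    )

def apply_renames(xml_text, renames):
    """Stand-in for running the generated XSLT: perform the same
    tag renames directly with ElementTree to show the end result."""
    root = ET.fromstring(xml_text)
    for el in root.iter():
        if el.tag in renames:
            el.tag = renames[el.tag]
    return ET.tostring(root, encoding="unicode")
```

Renames are the easy case, of course; the interesting MDD payoff would be generating XSLT for the harder transformations (splitting one tag into several, swapping whole composites), where hand-writing a stylesheet per version pair stops scaling.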