This was a hard choice – it took many months to reach the conclusion that this was what I needed to do.
Most people have gone through strong programming: they think you have to be ‘successful’ at something. Success is externally defined anyway (as opposed to satisfaction, which we define ourselves), and therefore you are supposed to study a certain field in college, then use it at work to build your career in that same field… and keep doing the same thing.
I was never like that – I didn’t go to college, I didn’t study as an ‘engineer’. I just saw there was a market opportunity to find a job when I started, studied on the job, and eventually excelled at it. But it was never *the* road. It was just one road; it has served me well so far, but it was just one thing I tried, and it worked out.

How did it start? As a pre-teen I had been interested in computers, then left that for a while, did a ‘normal’ high school (in Italy at the time, this was really non-technological), then tried to study sociology for a little bit – I really enjoyed the Cultural Anthropology lessons there, and we were smoking good weed with some folks outside of the university, but I really could not be bothered to spend the following 5 or 10 years of my life just studying and ‘hanging around’ – I wanted money and independence to move out of my parents’ house.
So, without much fanfare, I revived my IT knowledge: I upgraded my skills from the ‘hobbyist’ world of the Commodore 64 and Amiga scene (I had been passionate about modems and the BBS world then), looked at the PC world of the time, rode the ‘Internet wave’ and applied for a simple job at an IT company.
A lot of my friends were either not even searching for a job, with the excuse that there weren’t any, or spending time in university – in a time of change, when all the university-level jobs were taken anyway, so that would have meant waiting even longer after they had finished studying… I am not even sure they realized this until much later. But I just applied, played my cards, and got the job.
When I went to sign the contract, they also reminded me that they expected hard work at the simplest and humblest level: I would have to fix PCs and printers and help users with networking issues and similar tasks – at a customer of theirs, a big company. I was ready to roll up my sleeves and help that IT department however I was capable of, and I did. It all grew from there.
And that’s how my IT career started. I learned all I know of IT on the job: by working my ass off, studying extra hours, watching older and more expert colleagues, and gaining experience.
I am not an engineer. I am, at most, a mechanic. I did learn a lot about companies and the market, languages, designs, politics, and the human and technical factors in software engineering and the IT marketplace, over the course of the past 18 years.
But when I started, I was just trying to lend an honest hand, to get paid some money in return – isn’t that what work was about?
Over time, IT got out of control. Like Venom in the Marvel comics, which made its appearance as a costume that Spider-Man started wearing… and slowly took over, as the ‘costume’ was in reality some sort of alien symbiotic organism (like a pest).
You might be wondering what I mean. From the outside, I was a successful Senior Program Manager of a ‘hot’ Microsoft product. Someone must have mistaken my diligence and hard work for ‘talent’ or ‘career ambition’ – but it never was that. I got pushed up, taught to never turn down ‘opportunities’.
First and foremost, I am taking time for myself and my family:

- I am reading (and writing)
- I am cooking again
- I have been catching up on sleep – and have dreams again
- I am helping my father-in-law build a shed in his yard
- We bought a 14-year-old Volkswagen van that we are turning into a camper
- I have not stopped building guitars – in fact, I am getting set up to do it ‘seriously’, so I am also standing up a separate site to promote that activity
- I am making music and discovering new music and instruments
- I am meeting new people and new situations
There are a lot of folks out there who either think I am crazy (they might be right, but I am happy this way) or think this is some sort of lateral move – I am not searching for another IT job, thanks. Stop the noise on LinkedIn, please: I don’t fit in your algorithms; I just made you believe I did, all these years.
Lately this blog has been very personal. This post, instead, is about stuff I do at work – so if you are not one of my IT readers, don’t worry, feel free to skip it.

For my IT readers: an interruption from guitars and music on this blog, to share some personal reflections on OpInsights and SCOM.
SCOM is very powerful. You know I have always been a huge fan of the 2007 release, and I worked on the 2012 release myself. But, compared to its predecessor – MOM – it has always been very hard to author management packs in SCOM: multiple tools, a lot of documentation… and here we are, more than 6 years later, and the first 2 comments on an old post on the momteam blog still strike me hard every time I read them:
You would think that things have changed, but SCOM is fundamentally complex, and even with the advances in tooling (VSAE, MPAuthor, etc.) writing MPs is still black magic, if you ask some users.
Well, writing those alerting rules in SCOM requires a lot of complex XML – you might not need to know how to write it (but you often have to attempt deciphering it), and even if you create rules with a wizard, it will produce a lot of complex XML for you.
The screenshot below shows the large XML chunk that is needed to pick up a specific EventID from a specific log and a specific source: the key/important information is only a small fraction of it, while the rest is ‘packaging’:
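In case the screenshot does not come through, here is a sketch of what such a rule looks like in management pack XML. This is illustrative only – the rule ID and values are made up, and the element names are reconstructed from memory of the SCOM 2012 schema, not copied from the screenshot:

```xml
<!-- Illustrative sketch: an alerting rule matching EventID 1234
     from the 'Application' log, source 'MyApp'. Three short values,
     buried in an expression tree of 'packaging'. -->
<Rule ID="Custom.MyApp.Error.Alert.Rule" Enabled="true"
      Target="Windows!Microsoft.Windows.Computer">
  <Category>Alert</Category>
  <DataSources>
    <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.EventProvider">
      <LogName>Application</LogName>  <!-- key info #1: the log -->
      <Expression>
        <And>
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="UnsignedInteger">EventDisplayNumber</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                <Value Type="UnsignedInteger">1234</Value>  <!-- key info #2: the EventID -->
              </ValueExpression>
            </SimpleExpression>
          </Expression>
          <Expression>
            <SimpleExpression>
              <ValueExpression>
                <XPathQuery Type="String">PublisherName</XPathQuery>
              </ValueExpression>
              <Operator>Equal</Operator>
              <ValueExpression>
                <Value Type="String">MyApp</Value>  <!-- key info #3: the source -->
              </ValueExpression>
            </SimpleExpression>
          </Expression>
        </And>
      </Expression>
    </DataSource>
  </DataSources>
  <WriteActions>
    <WriteAction ID="Alert" TypeID="Health!System.Health.GenerateAlert" />
  </WriteActions>
</Rule>
```

Notice the signal-to-noise ratio: the log name, the EventID and the source are three short values, yet the schema demands the full expression tree, data source module and write action around them.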
I want OpInsights to be SIMPLE.
If there is one thing I want the most for this project, it is this.
That’s why the same rule can now be expressed with a simple filter search in OpInsights, where all you need is just that key information, and you essentially don’t have to care about any sort of packaging or mess with XML.
Click, click – filters/facets in the UI let you refine your criteria, and so do your saved searches. They execute right away; there is not even a ‘Done’ button to press. You might just be watching those searches pinned as tiles in your dashboard. All it took was identifying the three key pieces of information – no complex XML wrapping needed!
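To make the contrast concrete, the whole rule in the OpInsights search bar is a one-liner – field names and values here are illustrative, based on my recollection of the search syntax, so treat them as a sketch rather than exact syntax:

```
Type=Event EventLog=Application EventID=1234 Source=MyApp
```

That is the entire ‘rule’: just the key information, typed as a filter. Saving it and pinning it to a dashboard tile is a couple of clicks.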
Ok, granted – there ARE legitimate, more complex scenarios for which you need complex data sources/collectors and specialized, well-thought-out data shaping, not just events – and we use those powerful capabilities of the MMA agent in intelligence packs. But at its core, the simple search language and explorability of the data are meant to bring SIMPLE back to the modern monitoring world. Help us prioritize which data sources you need first!
PS – if you have no idea what I was talking about: thanks for making it this far, but don’t worry. Either you are not an IT person, in which case simply ignore this; or – if you are an IT person – go check out Azure Operational Insights!
This one you are reading now, instead, is more “philosophical” than technical – I think that, going forward, I’ll keep more of this distinction by posting my rants here on my personal blog, as they are only partially related to the products and more about my point of view on things. The reasons explained below are just those that I perceive and what drives me – I don’t mean in any way to speak on behalf of my company, our strategists or product planners.
I have heard statements from customers such as “AVIcode is a developer tool” or “APM is for QA/Test environments – if you need it in production you have not done your QA work well” and similar remarks. People asked why we brought the two together, for example, on the TechNet forums. Sure, it can be useful to employ such a tool in a development and QA/test environment too… but why not in production? With the frequent deployments that the agile business demands, change control alone can’t slow down the business, and sometimes bad things happen anyway – so we need solid monitoring to keep an eye on the behavior and performance of the system, exposed in a way that can quickly pinpoint where issues might be – be they in the infrastructure or in the code – and that enables people to efficiently triage and resolve them.

Sergey points out how APM in OpsMgr 2012 is much easier to set up, simpler to configure and cheaper to maintain than the standalone AVIcode product ever was, and hints at the fact that a comprehensive solution encompassing both the “traditional” systems management approach and Application Performance Monitoring is a good one. It is a good one, in its simplest form, because we get a simplified, unified and more cost-effective infrastructure. It is a good one – I would add – because we can extract a lot of useful information from within the applications, but only when those are running; when they are down altogether, APM is not very useful on its own, unless it is complemented by “traditional” OS and platform checks: before I wonder if my application is slow, I’d better ask “is IIS actually up and running? Is my application running at all?”. Operations Manager has historically been very good, with its management packs, at answering those questions. APM adds the deep application perspective to it, providing the rich data that Developers and Operations need to have an overall picture of what is going on in their systems and applications.
[…] On most projects I’ve worked on, the project team is split into developers, testers, release managers and sysadmins working in separate silos. From a process perspective this is dreadfully wasteful. It can also lead to a ‘lob it over the wall’ philosophy – problems are passed between business analysts, developers, QA specialists and sysadmins […] The Devops movement is built around a group of people who believe that the application of a combination of appropriate technology and attitude can revolutionize the world of software development and delivery […] these people understand the key point – we’re all on the same side! All of us – developers, testers, managers, DBAs, network technicians, and sysadmins – are all trying to achieve the same thing: the delivery of great quality, reliable software that delivers business benefit to those who commissioned it. […]
[…] The DevOps movement is a modern push from the software industry to instill better interaction and productivity between development (Dev) and IT operations (Ops). Instead of throwing applications “over the fence” blindly to operations, a fluid and much more effective DevOps process inserts transparency, efficiency and ownership into the art of developing, releasing and the production use of critical applications. It also binds the two traditionally siloed teams together. […]
When it comes to the DevOps ideas and concepts within Microsoft products, from what I can see, some customers really “get” it, and would like to see more in this sense. For example, I found this interesting blog post by James Dawson:
[…] The bulk of my work revolves around the Microsoft platform and to put it bluntly it is very much a second class citizen in terms of the available tooling.
Now I’ve fanned the flames, let me put some context around that. I don’t mean that as a criticism, in fact I view the status quo as an entirely natural result given where the movement grew out of and, to be frank, the mindset of the typical Microsoft IT shop. In a Microsoft environment there tends to be far greater reliance on big vendor products, whereas in the Linux/BSD world it is far more common to integrate a series of discrete tools into a complete tool chain that meets the needs for a given scenario. […]
I think James is right when he says this: he “gets” it, but we also have a vast user base of more “traditional” enterprise customers where these concepts have not been digested and understood yet. When it comes to traditional enterprises, what sometimes happens is well explained in this other article by Paul Krill:
[…] To protect the infrastructure, IT ops can put in place processes that seem almost draconian, causing developers to complain that these processes slow them down, says Glenn O’Donnell, an analyst at Forrester Research. Indeed, processes such as ITIL (IT Infrastructure Library) that provide a standardized way of doing things, such as handling change management, can become twisted into bureaucracy for its own sake. But sometimes, people "take a good idea too far, and that happens with ITIL, too." […]
And I think that is exactly one of the reasons why, even if many of our teams “get” it, we need to talk more about the DevOps culture in those places where it hasn’t arrived yet, so that these integrated products are more successful and can help solve problems – because some of these customers haven’t yet realized that it takes a culture shift before these new tools can be adopted. DevOps does not have critical mass today, but it could have it tomorrow. Even Gartner says:
[…] by 2015, DevOps will evolve from a niche strategy employed by large cloud providers into a mainstream strategy employed by 20% of the Global 2000 organizations. […]
So, back to suggesting that Microsoft produces more of this “goodness”, James again writes:
[…] I want to see the values espoused by DevOps spread far and wide, including the quietest backwaters of corporate IT, where Windows, Office and IE 6 reign supreme. To that end, the Microsoft infrastructure community needs to take a similar approach as the .NET community did and start bringing some of the goodness that we see in the Linux world to the Microsoft platform in a way that facilitates adoption for all and actually takes advantage of the platform’s innate richness and strengths. […]
So do I. And, from what I can tell, we are actually trying to bridge gaps and push the culture shift – integrating APM in OpsMgr is definitely an effort in this direction. Is it too “utopian” a vision? I don’t think it is; I think we can get there. But it will take some time. As this other article put it:
[…] The DevOps approach is so radical it will take some time to cross the chasm, and indeed it will be actively resisted by many organizations where it threatens traditional delivery models and organizational structures. […]
Let’s get Dev and Ops talking to each other, also in the Enterprise! I am all for it.
The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose. THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENTS SOMETHING WHICH I’VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. If you want to see the official info from my employer about the topic above, go to http://www.microsoft.com/presspass/presskits/cloud/default.aspx
Just some shameless personal plug here, pointing out that I recently wrote two technical posts on the momteam blog about the APM feature in Operations Manager 2012 – maybe you want to check them out:
APM object model – describes the object model that gets created by the APM Template/Wizard when you configure .NET application monitoring
Custom APM Rules for Granular Alerting – explains how you can leverage management pack authoring techniques to create alerting rules with super-granular criteria (going beyond what the GUI would let you do)
Hope you find them useful – if you are one of my “OpsMgr readers”!
This is the first public release since I joined the team (I started in this role the day after the team had shipped the Beta), and it is the first release that contains some direct output of my work. It feels so good!