I have not posted in a while. New Year's resolutions? Big plans? Nothing special, really – other than Service Pack 1 of System Center 2012 has finally shipped (that is, in case you are one of my IT-related readers) and I got featured in a video about some of the great things that are in it.
For the rest of the folks who are not working in IT and stumbled here as friends or out of simple curiosity, the rest of this post is for you. During the Christmas holidays I spent some time on a fun personal project: I bought a cheap Stratocaster copy in a thrift store and tried to bring it back to life (and was fairly successful in doing so). When I got it, it was covered in stickers, scratches, dirt… it had basically been used to play punk, I assume. Nothing wrong with punk, of course (heck, I even had a mohawk at one point in my teen years!), but I like to take better care of instruments (I never understood the whole guitar-smashing thing either, for that matter… even if Jimi Hendrix did it, it still strikes me as cruel and senseless).
And yes, I knew that this model of guitar ("Slammer by Hamer") was part of a cheap mass-production series, but I figured that since I had never done any restoration, it would be worth trying on something cheap first, before ruining something expensive. In the end it turned out a lot better than I expected.
So I started by unscrewing everything and collecting all the pieces – screws, electrical parts – in plastic boxes (a lot of those were cleaned separately with a variety of cleaning liquids, canned air, contact sprays, etc – but I didn't take pictures of that phase). Then, with the "naked" body, I started removing the stickers.
Then I used a chemical paint stripper. No matter how much I put on, how long I waited, or how hard I scrubbed, after two days it had really only removed the glossy varnish, not the paint itself. Still, it made the paint layer a lot thinner.
After cleaning up the glossy varnish and the stripper, I attacked the body with sandpaper. Almost 4 hours of sanding with elbow grease (I didn't have a sanding machine – I just did it by hand with a sanding block) to get it back to clean wood…
Then I started filling some dents in the body and some holes in the headstock. The headstock was actually in pretty decent condition, so I didn't do anything to its wood, but a few of the screws that were supposed to hold the tuning pegs were loose, and dirt had accumulated inside the tuning pegs, so I removed those, cleaned them up, lubricated them, and so on.
While waiting for the first paint to dry, I cleaned up the electrical pieces.
I had roughly tested those earlier (with a screwdriver) and they appeared to be functioning (at least, electricity was running through them – though I was not sure of the quality). Does anybody know what kind/model of GFS pickups these are? They have black/dark blue wiring on the coil… probably some of those on this page guitarfetish.com/Neovin-White-Pickups_c_143.html but I am not sure which ones.
Anyway, this was a pleasant surprise, and I think they had been replaced at some point earlier. Even though the way I found them mounted on the plastic scratchplate was horrible (some of the screws were longer than they should have been, which lifted and deformed the plastic), it at least meant that the thing would not sound too bad (there are reports all over the internet about how bad/noisy the stock pickups in this guitar series sound).
I planned on using what was there at first, hearing how it sounded, and then replacing the pickups later if I wasn't satisfied. They turned out to be not bad in the end, but I only found that out several days later. At the time, during the restoration, I just gave everything a good clean, used a contact cleaner spray in abundant doses, and re-assembled the scratchplate.
Then, as the paint was drying, I started wondering why this body was completely hollowed out, with the pickup cavity directly open towards the cavity on the back that holds the tremolo system and springs – I imagined they would interfere with each other and cause background noise. So I carved a small piece of scrap wood and glued it to the body to close off and separate those cavities again. I will also electrically shield the pickup cavity later on.
After many coats of stain plus wet sanding, I applied many coats of lacquer/finish and did even more wet sanding! I was busy with this process for several days. And man, this stuff stinks – do it outdoors if you don't want to poison yourself:
Then, after a lot of coats of finish and a lot of wet sanding, I used a polishing product (the kind used for cars!) that helps bring out the shine of the paint. After three passes with it, I wiped the surface the next day and then applied wax to protect the wood.
Time to start putting the pieces back together! First, I screwed the neck back in place and adjusted the truss rod (not precisely yet, just roughly – more adjustment comes later, once the strings go on, as part of setting the intonation…).
I installed the springs and the whole tremolo system. The guitar had no cover for this cavity, but I bought a replacement. Once the ground cable was passed through and soldered (later on), this cavity was closed.
I soldered the cables for the ground and the jack, and installed the electrical parts back in. I will eventually take all of these off again at some point, because I am planning to electrically shield the pickup cavity – but since the copper foil I ordered has not arrived yet, I'll give it a first try without shielding, to see if and how much noise these pickups make. That way I will be able to compare the "before" and "after" of the shielding. And this is how it looks, completely assembled (just missing strings at this point):
Quite a difference from how I found it! It almost looks like it's worth something now.
And you know what? After playing it for a couple of weeks, I also like the way it sounds – those GFS pickups aren't bad at all, with a lot of tonal variety, and not as noisy as I was expecting them to be. All in all, I was very pleased with the result of this project!
Now, onto the next challenge – I want to build one from scratch! That will be another post, if I actually get to do it.
Another song I put together recently, starting from the base chords used in a jam session with a friend (and doing quite a bit of re-arrangement afterwards).
It has been a couple of months since we released the CTP2 release (I had blogged about that here http://www.muscetta.com/2012/06/16/operations-manager-2012-sp1-ctp2-is-out-and-my-teched-na-talk-mgt302/ ) and we have now reached the Beta milestone!
Although you might have already seen a number of posts about this last week (i.e. http://blogs.technet.com/b/server-cloud/archive/2012/09/10/system-center-2012-sp1-beta-available-evaluate-with-windows-server-2012.aspx or http://blogs.technet.com/b/momteam/archive/2012/09/11/system-center-2012-service-pack-1-beta-now-available-for-download.aspx), the information on the blogs so far didn't quite explain all the various new features that went into it, so I want to give a better summary, specifically about the component I work on: Operations Manager.
Keep in mind that the below is just my personal summary – the official one is here http://technet.microsoft.com/en-us/library/jj656650.aspx – and it does explain these things… but since much of the OpsMgr community reads a lot of blogs, I wanted to highlight some points of this release.
- Support for installing the product on Windows Server 2012 for all components: agent, server, databases, etc.
- Support for using SQL Server 2012 to host the databases
- Global Service Monitor – this is actually something that the Beta enables, but the required MPs don't currently ship with the Beta download directly – you will be able to sign up for the Beta of GSM here. Once you have registered and imported the new MPs, you will be able to use our cloud-based capability to monitor the health of your web applications from a geo-distributed perspective that Microsoft manages and runs on Windows Azure, just like you would from your own agent/watcher nodes. Think of it as an extension of your network, or “watcher nodes in the cloud”.
This is my area – it is what the team I am on specifically works on – so I personally had the privilege of driving some of this work (not all of it – other PMs drove parts of it too!).
- Support for IIS8 with APM (.NET application performance monitoring) – this enables APM to monitor applications running on Windows Server 2012, not just 2008 anymore. The new Windows Server 2012 and IIS8 management packs are required for this to work. Please note that, if you have imported the previous “Beta” Windows 8 management packs, they will need to be removed prior to installing the official Windows Server 2012 management packs. For more about Windows Server 2012 support and MPs, read here http://blogs.technet.com/b/momteam/archive/2012/09/05/windows-server-2012-system-center-operations-manager-support.aspx
- Monitoring of WCF, ASP.NET MVC and .NET Windows services – we made changes to the agent so that we better understand and present data related to calls to WCF services, we support monitoring of ASP.NET MVC applications, and we enabled monitoring of Windows services that are built on the .NET Framework – the APM documentation here has been updated with regard to these changes and covers both 2012 RTM and SP1 (pointing out the differences where needed) http://technet.microsoft.com/en-us/library/hh457578.aspx
- Introduction of Azure SDK support – this means you can monitor applications that make use of Azure Storage with APM: the agent is now aware of Azure table, blob and queue operations as well as SQL Azure calls. It essentially means that APM events will tell you things like “your app was slow when copying that Azure blob” or “you got an access denied when writing to that table”.
- 360 .NET Application Monitoring Dashboards – this brings together different perspectives of application health in one place: it displays information from Global Service Monitor, .NET Application Performance Monitoring, and Web Application Availability Monitoring to provide a summary of health and key metrics for 3-tier applications in a single view. Documentation here http://technet.microsoft.com/en-us/library/jj614613.aspx
- Monitoring of SharePoint 2010 with APM (.NET application performance monitoring) – this was a very common ask from customers and the field, and some folks were trying to come up with manual configurations to enable it (i.e. http://blogs.technet.com/b/shawngibbs/archive/2012/03/01/system-center-2012-operation-manager-apm.aspx ), but now this comes out of the box and it is, in fact, better than what you could configure yourself: we had to change some of the agent code, not just configuration, to deal with some intricacies of SharePoint…
- Integration with Team Foundation Server 2010 and Team Foundation Server 2012 – functionality has also been enhanced compared to the previous TFS Synchronization management pack (which used to ship out of band; it is now part of Operations Manager). It allows operations teams to forward APM alerts ( http://blogs.technet.com/b/momteam/archive/2012/01/23/custom-apm-rules-for-granular-alerting.aspx ) to developers in the form of TFS work items, for things that operations teams might not be able to address themselves (i.e. exceptions or performance events that could require fixes/code changes).
- Conversion of Application Performance Monitoring events to IntelliTrace format – this enables developers to get information about exceptions from their applications in a format that can be natively used in Visual Studio. Documentation for this feature is not yet available, and it will likely appear as we approach the final release of the Service Pack 1. This is another great integration point between Operations and Development teams and tools, contributing to our DevOps story (my personal take on which was the subject of an earlier post of mine: http://www.muscetta.com/2012/02/05/apm-in-opsmgr-2012-for-dev-and-for-ops/)
- Support for monitoring of CentOS, Debian, and Ubuntu Linux – this is really one of my favorites: you might remember that I pioneered CentOS and Debian/Ubuntu monitoring from OpsMgr a few years ago, albeit in a totally unsupported fashion – here http://www.muscetta.com/2008/11/23/centos-discovery-in-opsmgr2007-r2-beta/ and here http://www.muscetta.com/2009/05/30/installing-the-opsmgr-2007-r2-scx-agent-on-ubuntu/ . Similarly, Robert (who now works on System Center Orchestrator) blogged a clearer step-by-step guide (in three parts http://blogs.msdn.com/b/scxplat/archive/2010/01/18/building-a-centos-management-pack-part-3.aspx ) on how to build such a management pack… and finally, official support is coming: read more about it from Kristopher (the PM driving this) here http://operatingquadrant.com/2012/09/14/sc2012-sp1-beta-operations-manager-adds-support-for-additional-linux-distros/ . Did I already say this is awesome?
- Improved heartbeat monitoring for UNIX/Linux – heartbeat monitors for Operations Manager UNIX and Linux agents now support a configurable “MissedHeartbeats” setting, allowing a defined number of failed heartbeats to occur before an alert is generated.
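To make the effect of a “MissedHeartbeats” threshold concrete: the worst-case time before an alert fires is simply the heartbeat interval multiplied by the threshold. A quick sketch (the 60-second interval and threshold of 3 below are example values I picked for illustration, not necessarily the product defaults):

```python
def worst_case_alert_delay(interval_seconds: int, missed_heartbeats: int) -> int:
    """Return the maximum seconds an agent can stay silent before an alert.

    The monitor only alerts after the configured number of consecutive
    heartbeats have been missed, so the delay is interval * threshold.
    """
    return interval_seconds * missed_heartbeats

# With a 60-second interval and 3 allowed misses, an unreachable agent
# can go unnoticed for up to 3 minutes:
print(worst_case_alert_delay(60, 3))  # 180
```

Raising the threshold reduces false alarms from transient network blips, at the cost of slower detection of a genuinely dead agent.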
Audit Collection Services
- Support for Dynamic Access Control in Windows Server 2012 – when was the last time an update to ACS was made? It seems like a long time ago to me… Windows Server 2012 enhances the existing Windows ACL model to support Dynamic Access Control, and System Center 2012 Service Pack 1 (SP1) contributes to fulfilling these scenarios by providing enterprise-wide visibility into the use of Dynamic Access Control.
- Additional network devices models supported – new models have been tested and added to the supported list
- Visibility into virtual network switches in vicinity dashboard – this requires integration with Virtual Machine Manager to discover the network switches exposed by the hypervisor
- Production use is NOT supported for customers who are not part of the TAP program
- Upgrade from CTP2 to Beta is NOT Supported
- Upgrade from 2012 RTM to SP1 Beta will ONLY be supported for customers participating in the TAP Program
- Procedures not covered in the documentation might not work
Sure, not the most original name… maybe you can suggest one
This one is featuring myself playing a few different guitar parts and even the electric violin I just recently bought.
As you might have already heard, this has been an amazing week at TechEd North America: System Center 2012 has been voted as the Best Microsoft Product at TechEd, and we have released the Community Technology Preview (CTP2) of all System Center 2012 SP1 components.
I wrote a (quick) list of the changes in Operations Manager CTP2 in this other blog post, and many of those are related to APM (formerly AVIcode technology). I also demoed some of these changes in my session on Thursday – you can watch the recording here. I think one of the most-awaited changes is support for monitoring Windows services written in .NET – but there is more than that!
In the talk I also covered a bit of Java monitoring (which is the same as in 2012 – no changes in SP1), and my colleague Åke Pettersson talked about synthetic transactions and how to bring it all together (synthetics and APM) in a single new dashboard (also shipping in SP1 CTP2) that gives you a 360-degree view of your applications. The CTP2 documentation covers both the changes to APM and how to light up this new dashboard.
When it comes to synthetics – I know you have been using them from your own agents/watcher nodes – but to get a complete picture from the outside in (the “last mile”), we have now also announced the Beta of Global Service Monitor (it was even featured in the keynote!). Essentially, we extend your OpsMgr infrastructure to the cloud: you upload your tests to our Azure-based service, we run those tests against your Internet-facing applications from our watcher nodes in various datacenters around the globe, and we feed the data back to your OpsMgr infrastructure, so that you can see whether your application is available, and how it is responding, from those locations. You can sign up for the consumer preview of GSM from the Connect site.
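Conceptually, each watcher node is doing something like the sketch below: hit a public URL, record whether it responded and how long it took. This is purely an illustrative toy – the function name and result shape are mine, not anything from the product, which does far more (multi-step tests, scheduling, feeding results into the management group):

```python
import time
import urllib.request

def probe(url: str, timeout: float = 10.0) -> dict:
    """Fetch a URL once and report availability and response time."""
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
        ok = 200 <= status < 400
    except Exception:
        # DNS failure, refused connection, timeout, or HTTP error:
        # from an availability perspective, all count as "down".
        ok = False
    return {"url": url, "ok": ok, "status": status,
            "elapsed_s": round(time.monotonic() - start, 3)}

# A real watcher node would run this on a schedule from several
# geographic locations and report the results back centrally.
```

Running the same probe from several vantage points is what turns a simple availability check into the “outside in” perspective described above.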
Enjoy your beta testing! (Isn’t that what weekends are for, geeks?)
So, after a hectic period with the move and with adapting to a new country/job/life, I finally managed to reconnect and reconfigure my musical equipment and play some music again. I have a few tracks I have been composing… this one is a remake of something I had written many years ago (almost 15, in fact). To tell the truth, it has very little in common musically with the original (whose score/files have long been lost), but it bears the same overall "atmosphere" to me, and I reused (part of) the original lyrics.
I hope you like it.
This was my granddad's typewriter – a very heavy Olivetti Editor – that I used to observe with great interest (almost fascination) when I was a kid. My granddad used to write official letters on it and do some administrative work in his later years, after he had retired. When I was a little kid, it was some sort of "sacred" device we had at home, belonging to the grown-up, serious world – nothing to play with, shrouded in austerity. If not used with care, it was easy to get the paper jammed, the ink ribbon tangled up, the letterheads stuck.
And yet I was granted the privilege to use it, as my granddad had a lot of patience with me – he let me learn to type on it, years before home and personal computers became readily available to us. I remember him helping me "publish" my "books" (unique copies of two- or three-page fantasy stories I had invented myself when I was about 7 or 8 years old). Those don't even exist anymore, except in my memory.
When my grandpa and grandma died, my mum and her brother started going through their things – they had to deal with the house they had been living in, kept some objects, sold others, donated other ones… as happens in those situations.
Nobody really wanted this typewriter, and it is a pretty useless piece of technology in these days of smartphones and tablets… but I kept it for a while, at least until we relocated to the USA (and I would not even know where to keep it today)…
With it, a piece of my history came to an end, and it left thoughts spinning in my mind, like those you get after finishing a book or a good movie that made you think… when you are not quite sure the story is really finished.
I recently wrote a couple of technical posts about the object model we have chosen for APM in OpsMgr 2012 and how to author granular alerting rules for APM in XML. That's more the type of post that pertains to the momteam blog.
This one you are reading now, instead, is more “philosophical” than technical – I think that, going forward, I'll keep more of this distinction by posting my rants here on my personal blog, as they are only partially related to the products and more about my point of view on things. The reasons explained below are just the ones I perceive and that drive me – I don't mean in any way to be speaking on behalf of my company, our strategists or product planners.
I have heard statements from customers such as “AVIcode is a developer tool” or “APM is for QA/test environments – if you need it in production, you have not done your QA work well” and similar remarks. People asked, for example on the TechNet forums, why we brought the two together. Sure, it can be useful to employ such a tool in a development and QA/test environment… but why not in production? With the frequent deployments that an agile business demands, change control alone can't slow down the business, and sometimes bad things happen anyway – so we need solid monitoring to keep an eye on the behavior and performance of the system, exposed in a way that can quickly pinpoint where issues might be – be they in the infrastructure or in the code – and that enables people to efficiently triage and resolve them.

Sergey points out how APM in OpsMgr 2012 is much easier to set up, simpler to configure and cheaper to maintain than the standalone AVIcode product ever was, and hints at the fact that a comprehensive solution encompassing both the “traditional” systems management approach and application performance monitoring is a good one. It is a good one, in its simplest form, because we get a simplified, unified and more cost-effective infrastructure. It is a good one – I would add – because we can extract a lot of useful information from within the applications, but only while they are running; when they are down altogether, APM is not very useful on its own, unless it is complemented by “traditional” OS and platform checks: before I wonder whether my application is slow, I'd better ask “is IIS actually up and running? Is my application running at all?”. Operations Manager has historically been very good, with its management packs, at answering those questions. APM adds the deep application perspective to it, providing the rich data that developers and operations need to get an overall picture of what is going on in their systems and applications.
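To illustrate what “extracting information from within the application” means in practice, here is a toy sketch of the idea behind an APM agent: wrap a call, time it, and record an event when it runs slower than a threshold or throws an exception. Everything here (names, event shape, thresholds) is invented for illustration – the real APM agent instruments the .NET runtime and is configured centrally, not a Python decorator:

```python
import time
from functools import wraps

events = []  # collected "APM events" (illustrative stand-in for the agent's output)

def monitored(threshold_seconds=0.5):
    """Decorator sketching APM-style instrumentation of a single call."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            except Exception as exc:
                # Exception event: the kind of data a developer needs.
                events.append({"type": "exception", "call": func.__name__,
                               "error": repr(exc)})
                raise
            finally:
                elapsed = time.monotonic() - start
                if elapsed > threshold_seconds:
                    # Performance event: the call exceeded its threshold.
                    events.append({"type": "performance", "call": func.__name__,
                                   "elapsed_s": elapsed})
        return wrapper
    return decorator

@monitored(threshold_seconds=0.05)
def slow_operation():
    time.sleep(0.1)  # stands in for a slow database or web-service call

slow_operation()
print(events[0]["type"])  # performance
```

The point of the sketch is the division of labor described above: this kind of in-process data only exists while the application runs, which is why it needs the “traditional” platform checks alongside it.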
In my opinion, in this world of continuous service improvement and cloud services, IT management is tearing down the walls between what have traditionally been two separate worlds: “Operations” (Ops) teams and Development (Dev) teams. So, while people ask why we brought what was more of a developer tool into a pure systems management tool, it is clear to me that those areas are converging. Even vendors who start from the opposite approach (APM) eventually go “back to basics” and begin implementing server-level systems management, such as showing disk space and CPU utilization. Whatever your starting point was, everybody wants, and feels the need, to bring those two worlds and disciplines together.
This line of thoughts has even been given a name: “DevOps”.
“What is this DevOps thing, anyway?” is one famous post that can be found on the web, where Stephen Nelson-Smith writes:
[…] On most projects I’ve worked on, the project team is split into developers, testers, release managers and sysadmins working in separate silos. From a process perspective this is dreadfully wasteful. It can also lead to a 'lob it over the wall' philosophy – problems are passed between business analysts, developers, QA specialists and sysadmins […] The Devops movement is built around a group of people who believe that the application of a combination of appropriate technology and attitude can revolutionize the world of software development and delivery […] these people understand the key point – we’re all on the same side! All of us – developers, testers, managers, DBAs, network technicians, and sysadmins – are all trying to achieve the same thing: the delivery of great quality, reliable software that delivers business benefit to those who commissioned it. […]
DevOps – the war is over if you want it is a presentation by Patrick Debois which I also encourage you to check out, as it is very evocative through its images:
[…] The DevOps movement is a modern push from the software industry to instill better interaction and productivity between development (Dev) and IT operations (Ops). Instead of throwing applications “over the fence” blindly to operations, a fluid and much more effective DevOps process inserts transparency, efficiency and ownership into the art of developing, releasing and the production use of critical applications. It also binds the two traditionally siloed teams together. […]
Last but not least, 10+ Deploys Per Day: Dev and Ops Cooperation at Flickr (another presentation from a conference) is a real-world example of a large scale web site (Flickr) and how those practices are adopted.
When it comes to the DevOps ideas and concepts within Microsoft products, from what I can see, some customers really “get” it, and would like to see more in this sense. For example, I found this interesting blog post by James Dawson:
[…] The bulk of my work revolves around the Microsoft platform and to put it bluntly it is very much a second class citizen in terms of the available tooling.
Now I’ve fanned the flames, let me put some context around that. I don’t mean that as a criticism, in fact I view the status quo as an entirely natural result given where the movement grew out of and, to be frank, the mindset of the typical Microsoft IT shop. In a Microsoft environment there tends to be far greater reliance on big vendor products, whereas in the Linux/BSD world it is far more common to integrate a series of discrete tools into a complete tool chain that meets the needs for a given scenario. […]
I think James is right: he “gets” it, but we also have a vast user base of more “traditional” enterprise customers where these concepts have not been digested and understood yet. What sometimes happens in traditional enterprises is well explained in this other article by Paul Krill:
[…] To protect the infrastructure, IT ops can put in place processes that seem almost draconian, causing developers to complain that these processes slow them down, says Glenn O'Donnell, an analyst at Forrester Research. Indeed, processes such as ITIL (IT Infrastructure Library) that provide a standardized way of doing things, such as handling change management, can become twisted into bureaucracy for its own sake. But sometimes, people "take a good idea too far, and that happens with ITIL, too." […]
And I think that is exactly one of the reasons why, even if many of our teams “get” it, we need to talk more about the DevOps culture in those places where it hasn't arrived yet, so that these integrated products are more successful and can help solve real problems – because some of these customers haven't yet realized that it takes a culture shift before these new tools can be adopted. DevOps does not have critical mass today, but it could have it tomorrow. Even Gartner says:
[…] by 2015, DevOps will evolve from a niche strategy employed by large cloud providers into a mainstream strategy employed by 20% of the Global 2000 organizations. […]
So, back to suggesting that Microsoft produces more of this “goodness”, James again writes:
[…] I want to see the values espoused by DevOps spread far and wide, including the quietest backwaters of corporate IT, where Windows, Office and IE 6 reign supreme. To that end, the Microsoft infrastructure community needs to take a similar approach as the .NET community did and start bringing some of the goodness that we see in the Linux world to the Microsoft platform in a way that facilitates adoption for all and actually takes advantage of the platform’s innate richness and strengths. […]
So do I. And, from what I can tell, we are actually trying to bridge gaps and push the culture shift – integrating APM into OpsMgr is definitely an effort in this direction. But it might take some time. Is it too “utopian” a vision? I don't think so; I think we can get there, though it will take a while. As this other article put it:
[…] The DevOps approach is so radical it will take some time to cross the chasm, and indeed it will be actively resisted by many organizations where it threatens traditional delivery models and organizational structures. […]
Let’s get Dev and Ops talking to each other, also in the Enterprise! I am all for it.
The information in this weblog is provided "AS IS" with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided "AS IS" without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENTS SOMETHING WHICH I'VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE HELD RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. If you want to see the official info from my employer about the topic above, go to http://www.microsoft.com/presspass/presskits/cloud/default.aspx