I just saw that my former colleague (PFE) Tristan has posted an interesting note about the use of SetSPN “-A” vs SetSPN “-S”. I normally don’t repost other people’s content, but I thought this would be useful, as there are a few SPNs used in OpsMgr and it is not always easy to get them all right… and you can find a few tricks I was not aware of by reading his post.
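For context: “-S” (available from Windows Server 2008 onwards) checks for duplicate SPNs before adding one, while “-A” adds it blindly. A minimal sketch of registering the OpsMgr Health Service SPN; the account name and FQDN below are placeholders:

REM add the Health Service SPN, failing if a duplicate already exists
setspn -S MSOMHSvc/omserver.contoso.com CONTOSO\OMServiceAccount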
Now that this promise has been maintained, and the SCX providers have been released on CodePlex at http://xplatproviders.codeplex.com/, it should finally be possible to entirely build your own unsupported agent package, starting from source code, without having to modify the original package as I have shown earlier on this blog. Of course this will still be unsupported by Microsoft Product Support, but it will eventually work just fine! This is an extraordinary event in my opinion, as it is not common for Microsoft to release code as open source, especially when it is part of one of the products it sells. I suspect we will see more of this going forward.
Anyway, in the past I have published a number of posts on this blog under this tag http://www.muscetta.com/tag/xplat/ (I will continue to use that tag going forward) which show/describe how I hacked/modified both the existing MPs AND the SCX agent package to let it run on unsupported distributions (and I think they are still useful, as they show a number of techniques for testing, understanding and troubleshooting the Xplat agent). In fact, I first learned how to understand and modify the RedHat MPs to monitor CentOS, and eventually even modified the RPM package to run on Ubuntu (which also works on Debian 5/Lenny) – as you can see, I am now using it to monitor – from home, across the Internet – the machine running this blog:
After all, those experiments with Xplat earned me a reputation as a “Unix expert at Microsoft” (an expression that still makes me laugh), as I was tweeting here:
The information in this weblog is provided “AS IS” with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided “AS IS” without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose. THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENTS SOMETHING WHICH I’VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.
Yesterday, when trying to burn an audio CD (to listen to music in my car) from MP3 files using Windows Media Player 11, I kept getting the message “connect a burner and restart the player”, and the “Start Burn” button was greyed out, as if the program could not see that my CD/DVD burner is actually capable of writing CDs:
But I knew the DVD/CD burner was connected and working, because I had used it the very same day (with another program) to burn an .ISO image, and it worked from there!
I searched all over the place for this error message, and there are many forum posts about it, suggesting the strangest things: changing your computer, deleting important pieces of the registry, reinstalling the whole system… most of them are bullshit.
I went to my wife’s PC to test… and on her PC it worked. The setup looked mostly the same: she’s running Vista, not 2008 (but it really is the same kernel, isn’t it?), she has exactly the same DVD burner as I do, the same motherboard, both machines and OSes are 64-bit, we both have Internet Explorer 8 installed (and keep “protected mode” turned ON), we both have Media Player 11, we both keep UAC enabled…
But then in the end I tried using elevation:
And here we go, it worked:
When running the process as administrator, Windows Media Player on Windows Server 2008 is able to query the hardware and determine that a capable device is present. It remains a mystery to me at this point why this works on my wife’s Vista machine without elevation, though…
It sure is not a problem to do this operation “as administrator” when needed – but it just took me a minute to figure it out, for some reason.
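Incidentally, if you prefer a command line over the right-click menu, launching it from PowerShell (v2 or later) with the RunAs verb triggers the same UAC elevation prompt. A minimal sketch; the wmplayer.exe path is an assumption and may differ on your system:

# launch Windows Media Player elevated (pops the UAC consent dialog)
Start-Process -FilePath "$env:ProgramFiles\Windows Media Player\wmplayer.exe" -Verb RunAs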
So this is a screenshot from my new quad-core Intel Q6600 with 8GB RAM, running Windows 2008 Enterprise x64 with Hyper-V. I bought and installed it a few days ago, and migrated my home Active Directory off the old Windows 2003 machine to Windows 2008. Yes, I have an Active Directory at home. I know, I am probably nuts, but you already knew that much.
Today, I updated Hyper-V to the RTM version. Oh yeah, because Hyper-V was Released To Manufacturing today! You can get it HERE.
I am having a lot of fun with this. I had not bought a new PC in about 7 years and could not really test anything on the old one anymore… I paid roughly 100 euros for the 8GB of RAM, which is not a lot if you think about it. These days even standard “budget” PCs for just doing email and web surfing ship with 2 or 4GB… With that amount of RAM, I expect this machine to last several years, like the previous one did. The one I bought 7 years ago had 512MB when everybody was buying 128 or 256MB. Kinda the same story here.
Wondering what happens to the old PC? That glorious machine that has been my server for years has now been converted into the new kids’ PC, and will go on for a few more years like that, I hope.
I am testing the beta bits of the cross-platform extensions that were released on Microsoft Connect.
This post describes my limited testing so far – I hope it can help everyone testing the beta with some things that might currently not be incredibly clear – unless you attended the MMS class, at least 🙂
I started out with the white paper that has been posted on the web, which describes the architecture pretty well, but from a high level (with diagrams and the like). Then I downloaded the beta bits, which contain another document about setting the thing up. It is pretty well done, to be honest (especially if you consider that it is beta documentation for a beta product!), but it does not really go all the way down into troubleshooting yet. I will try to cover some of that here.
I installed the agent manually – it’s just an RPM package, not much that can go wrong with that. There is a reason why I did not use the push discovery and deployment of the agent, which you will figure out reading on. Once installed, I tried to figure out how things looked on the Linux machine. It is all pretty understandable if you look around on the machine (documented or not, Linux and open source stuff is easy to figure out by reading configuration files and the like, and by searching the web).
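For reference, the manual install boils down to a single command. This is a sketch: the actual package file name depends on the beta build and the target distribution:

# install the SCX agent package manually (the file name below is a placeholder)
rpm -ivh scx-<version>.<distro>.<arch>.rpm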
Basically the “agent” is not properly an “agent” the way the Windows agent is, since it does not really “send” stuff to the Management Server on its own. It consists of a couple of services/daemons, based on existing open source projects, but configured in their own folder, with their own name, and using different ports than a standard install of those, so as not to conflict with any existing ones on those machines.
The Management Server uses these services remotely (similar to doing agentless monitoring towards a Windows box). The two services are the WS-Management daemon (based on the openwsman project, listening on port 1270) and the CIM daemon (CIMD, based on OpenPegasus).
I still have to delve into them as properly as I would like to, but I have already figured out a bunch of interesting things just by looking at them quickly.
Agent communication: someone must have decided to “recycle” the 1270 port number that was used in MOM 2005 🙂 Basically openwsman runs an SSL listener (with Basic authentication, wired via a PAM module to the “regular” Unix /etc/passwd users, so you can authenticate as those without having to define specific users for the service). So all that happens is that the Management Server executes WS-Man queries and commands on this channel: it connects to the agent on port 1270 using SSL every time, authenticates as “root” (or as the specified “Action Account”) and does its stuff, or asks the agent to do it. The communication therefore goes from the Management Server to the agent… not the other way around like it happens with Windows “agents”. That’s why it feels to me more like an “agentless” thing, at least for what concerns the “direction” of traffic and who does the actual querying.
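Incidentally, a quick way to check that the SSL listener is up, and to look at the certificate it presents, is a plain openssl probe from any machine that can reach the agent (a sketch; the host name is a placeholder):

# probe the agent's WS-Man SSL listener and dump its certificate
openssl s_client -connect linuxhost.example.com:1270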
For the rest, the provided Management Packs have “normal” discoveries and “normal” monitors. Pretty much like the Windows Management Packs often discover things by querying WMI, here they use WS-Man to run CIM queries against the Unix boxes.
The Service Model is totally cool to actually *SEE* in action, don’t you think so?
A few more debugging/troubleshooting notes:
I searched a bit and found the openwsman.org documentation and forum useful to figure some things out. For example, I banged my head a few times before managing to actually TEST a query from Windows to Linux using WinRM. This document helped a lot.
Of course you first have to solve some other things, such as DNS resolution AND trusting the self-signed certificates that the agent uses. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM.
For example, this is how I tested what the discovery for a Linux RedHat Computer type should be returning (I read that by opening the MP in authoring console, as one would usually do for any MP):
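The command line looked roughly like this. A sketch, not a verbatim transcript: the host name and password are placeholders, and -skipCACheck is only needed if you have not imported the agent’s certificate on the Windows side:

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -r:https://linuxhost.example.com:1270/wsman -username:root -password:<password> -auth:basic -skipCACheck -encoding:utf-8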
If you need to test the query directly *ON* the Linux box (querying the CIMD rather than the WSMAND), the WBEMEXEC utility is packaged with the agent (under /opt/microsoft/scx/bin/tools). It is not as easy as some Windows administrators (who have used WBEMTEST or WMI Tools in the past) would hope, but it’s not that bad either. It is not really interactive, though: just to run a few queries against the CIM daemon locally, you need to create an XML file like the following (basically you build the RAW request the way the CIMD accepts it):
<?xml version="1.0" ?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
<MESSAGE ID="50000" PROTOCOLVERSION="1.0">
<SIMPLEREQ>
<IMETHODCALL NAME="EnumerateInstanceNames">
<LOCALNAMESPACEPATH>
<NAMESPACE NAME="root"/>
<NAMESPACE NAME="scx"/>
</LOCALNAMESPACEPATH>
<IPARAMVALUE NAME="ClassName">
<CLASSNAME NAME="SCX_OperatingSystem"/>
</IPARAMVALUE>
</IMETHODCALL>
</SIMPLEREQ>
</MESSAGE>
</CIM>
Once you have made such a file, you can execute the query in the file with the tool like the following:
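In my case it was something along these lines (a sketch, assuming the request above was saved as /tmp/query.xml):

# send the raw CIM-XML request to the local CIM daemon and print the response
/opt/microsoft/scx/bin/tools/wbemexec /tmp/query.xml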
As you can see from here, the CIMD speaks HTTP already. This differs from Windows’ WMI, which uses RPC/DCOM. In a way, this is much simpler to troubleshoot, and more firewall-friendly.
I have not really found an activity or debug log for any of those components yet… but in the end they don’t do anything ON THEIR OWN unless asked by the Management Server… so the “healthservice” logic all lives on the Management Server anyway. Errors about failed discoveries, permissions of the Action Account user, and anything else will be logged by the HealthService on the Windows machine (the Management Server) that is actually performing the monitoring towards the Unix box.
It really is *just* a matter of getting the WMI- and WinRM-equivalent layer on Linux/Unix up and running – after that, everything is done from Windows anyway!
Now that this common management infrastructure is provided, third parties will be able to write *just* MPs, without having to worry about the TRANSPORT of information anymore.
As you have probably noticed from the screenshots and command lines, I don’t have a “real” RedHat Enterprise Linux or other “supported” Linux distribution… so I started my testing with CentOS 5 (which is very similar to RHEL 5). The agent installed fine, as you can see, but nothing was really being “discovered”: the MP had only found a “Linux computer”, but no “RedHat”, “SuSE” or other “Operating System” instances… and if you are somewhat familiar with the way Operations Manager targeting works, you know that monitors are targeted at object classes. If no instances of those classes are discovered, NO MONITORING actually happens, even if the infrastructure is in place and the pieces are talking to each other:
Therefore my machine was not being monitored.
In the end I actually got it to work, but I had to create a new Management Pack (using an export of the RHEL5 one as a base) that searches for different property values and discovers CentOS as if it were RedHat:
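To give an idea of the kind of change involved: the discovery’s expression filter has to accept the CentOS property values where the original MP expected the Red Hat ones. The fragment below is hypothetical; the real discovery modules and property names in the beta MPs may well differ:

<!-- hypothetical filter: match the CentOS OS name where the original MP expected Red Hat -->
<Expression>
  <SimpleExpression>
    <ValueExpression>
      <XPathQuery>//*[local-name()="OSName"]</XPathQuery>
    </ValueExpression>
    <Operator>Equal</Operator>
    <ValueExpression>
      <Value>CentOS</Value>
    </ValueExpression>
  </SimpleExpression>
</Expression>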
After importing my hacked Management Pack the machine started to be monitored. Here you can see Health Explorer in all of its glory:
Of course this is a hack I made just to have a test setup somewhat working and to familiarize myself with the SCX components. There is no guarantee that my Management Pack actually works on CentOS the way it is supposed to, or that there aren’t other – more subtle – differences between RedHat and CentOS that will make it fail. I only modified a couple of discoveries to let it discover the “Operating System” instance… everything else should follow, but not necessarily. One difference you can already see in the screenshot above is that the hardware is not yet being monitored, so my hack is only partially working; it is definitely something that won’t be supported, so I cannot provide it here. Also, this is a beta, so I think the Management Packs will be re-released with subsequent beta versions, and this change would need to be re-done all over again. The unsupported distribution is also the reason why I installed the agent manually in the first place: the “Discovery Wizard” would not “agree” to install the agent remotely on an unsupported platform!
It is interesting to see the bunch of open source projects written on and for the Microsoft platform grow and grow, and it is also nice to see that a lot of Microsoft employees are very active in, and aware of, the open source ecosystem, rather than being stuck with only what the company makes. Phil Haack, in a post about an interview with Brad Wilson, wisely writes:
"[…] What I particularly liked about this post was the insight Brad provides on the diverse views of open source outside and inside of Microsoft as well as his own personal experience contributing to many OSS projects. It’s hard for some to believe, but there are developers internal to Microsoft who like and contribute to various open source projects. […]"
"[…] Hey. My name is Ariel and I’m the person you thought would never work at MSFT […]".
In fact, just as I do, she is running that blog on WordPress, posting her photos on Flickr, using an RSS feed on FeedBurner and in general using a bunch of things that are out there that might be seen as "competing" with what Microsoft makes. This attitude towards other products and vendors on the market is what I am mainly interested in. Should we only use flagship products? Sure, when they help us, but not necessarily. Who cares? People’s blogs are not, as someone would like them to be, a coordinated marketing effort. This is about real people, real geeks, who just want to share and communicate personal ideas and thoughts. I had a blog before joining Microsoft, after all. Obviously I had exposure to competing products. My server was running LAMP on Novell Netware in 2002 – after which I moved it to Linux. It is not a big deal.

If I try to put things in perspective, in fact, this is turning out to be an advantage. I am saying this because the latest interoperability news comes from MMS (Microsoft Management Summit): the announcement that System Center Operations Manager will monitor Linux natively. I find this to be extremely exciting, and a step in the right direction… to say it all, I am LOVING this!!! But at the same time I see some other colleagues in technical support who are worried and scared by this: "if we do monitor Linux and Unix, we are supposed to have at least some knowledge of those systems", they are saying. Right. We probably do. At the moment there are probably only a limited number of people who can actually do that, at least in my division. But this is because in the past they must have sacrificed their own curiosity to become "experts" in some very narrow and "specialized" thing. Here we go. I, on the opposite, kept using Linux – even when other "old school" employees would call me names. All of a sudden, someone else realizes my advantage. …but a lot of geeks already understood the power of exploration, and won’t stop defining people by easy labels.

Another cool quote I read the other day is what Jimmy Schementi has written in his Flickr profile:
"[…] I try to do everything, and sometimes I get lucky and get good at something […]".
Reading his blog, it looks like he also gave up on trying to write a Twitter plugin for MSN/Live Messenger (or maybe he never tried – I, at least, wanted to do that) and wrote one for Pidgin instead. Why did he do that? I don’t know; I suppose because it was quicker/easier – and there were APIs and code samples to start from.
The bottom line, for me, is that geeks are interested in figuring out cool things (no matter what language or technology they use) and eventually communicating them. They tend to be pioneers of technologies. They try out new stuff. Open Source development is a lot about agility and "trying out" new things. Another passage of Brad’s interview says:
"[…] That’s true–the open source projects I contribute to tend to be the “by developer, for developer” kind, although I also consume things that are less about development […] Like one tool that I’ve used forever is the GIMP graphics editor, which I love a lot".
That holds true when you consider that a lot of these things are not really mainstream. Tools made "by developer, for developer" are usually a sort of experimental ground. Like Twitter. Every geek is talking about Twitter these days, but you can’t really say that it is mainstream. Twitter has quite a bunch of interesting aspects, though, and that’s why geeks are on it. Twitter lets me keep up to date quicker and better (and with a personal, conversational touch) than RSS feeds and blogs do. Also, there are a lot of Microsofties on Twitter, and the cool thing is that you can really talk to everybody, at any level. Not everybody "gets" blogs, social networks, and microblogging, of course; you cannot expect everybody to be on top of the tech news, or to use experimental technologies. So in a way stuff like Twitter is "by geeks, for geeks" (not really just for developers – there’s a lot of "media" people on Twitter).

Pretty much in the same way, a lot of people I work with (in direct contact, every day) only found out about LinkedIn during this year (2008!). I joined Orkut and LinkedIn in 2004. Orkut was in private beta back then. A lot of this stuff never becomes mainstream; some does. But it is cool to discover it when it gets born. How long did it take for social networking to become mainstream? So long that by the time it is mainstream for others, I have seen it for so long that I am even getting tired of it.
"[…] some of them we will be putting out on officelabs.com for the general public (you folks!) to try so we can understand how "normal" people would use these tools. Now of course, as we bloggers and blog-readers know, we’re not actually normal – you could even debate whether the blogosphere is more warped than the set of Microsoft employees, who comprise an interesting cross-section of job types, experiences, and cultures. But I digress. […]"
But I have been digressing, too, all along. As usual.
It’s Friday night, I am quite tired, and I can’t be bothered to write a long post. But I have not written much all week, not even updated my Twitter, and I want to finish the week with at least some goodies. So it is the turn of a couple of PowerShell commands/snippets that count the alerts and events generated each day: this information can help you understand the trends of events and alerts over time in a Management Group. It is nothing fancy at all, but it can still be useful to someone out there. In the past (MOM 2005) I used to gather this kind of information with SQL queries against the operations database. But now, with PowerShell, everything is exposed as objects and it is much easier to get the information without really getting your hands dirty with the database 🙂
$array = @()
$eventtimes = get-event | select-object timegenerated   # get-event comes from the OpsMgr Command Shell snap-in
foreach ($datetime in $eventtimes){ $array += $datetime.timegenerated.date }
$array | Group-Object Date
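The alerts version is the same idea. A sketch, assuming the OpsMgr Command Shell’s get-alert cmdlet (alerts expose TimeRaised rather than TimeGenerated):

$array = @()
$alerttimes = get-alert | select-object timeraised
foreach ($datetime in $alerttimes){ $array += $datetime.timeraised.date }
$array | Group-Object Date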
Beware that these “queries” might take a long time to execute (especially the events one) depending on the amount of data and your retention policy.
This is of course just scratching the surface of the amount of amazing things you can do with Powershell in Operations Manager 2007. For this kind of information you might want to keep an eye on the official “System Center Operations Manager Command Shell” blog: http://blogs.msdn.com/scshell/
I just created an account for myself on the site… as you know, I like PowerShell, so even though I usually prefer writing in English, I will try to hang out there and see how I can contribute.
This was a VMware “virtual appliance” with Ubuntu that I was using for testing. As I mostly use Virtual PC or Virtual Server, I found it annoying having to switch to VMware Player just to use that specific machine, and I could not be bothered to install a new one. So I converted the .VMDK to .VHD format (the opposite direction to what is described in this article).
After that, I had to change GRUB’s configuration to tell it that the SCSI disk (/dev/sda1) had all of a sudden become an IDE one (/dev/hda1), and then I also had to reconfigure X.
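For the record, the change amounts to something like the following. A sketch, assuming GRUB legacy with /boot/grub/menu.lst (as on the Ubuntu releases of that time) and plain device names, rather than UUIDs, in /etc/fstab:

# point every reference to the old SCSI device at the new IDE one
sed -i 's|/dev/sda|/dev/hda|g' /boot/grub/menu.lst /etc/fstab
# then regenerate the X configuration (Debian/Ubuntu)
dpkg-reconfigure xserver-xorg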