Now that this promise has been maintained, and the SCX providers have been released on CodePlex at http://xplatproviders.codeplex.com/, it should finally be possible to build your own unsupported agent package entirely from source code, without having to modify the original package as I have shown earlier on this blog. Of course this will still be unsupported by Microsoft product support, but it will work just fine! This is an extraordinary event in my opinion, as it is not common for Microsoft to release code as open source, especially when it is part of one of the products it sells. I suspect we will see more of this going forward.
Anyway, I have in the past published a number of posts on this blog under the http://www.muscetta.com/tag/xplat/ tag (which I will keep using going forward) that show and describe how I hacked/modified both the existing MPs AND the SCX agent package to let it run on unsupported distributions. I think they are still useful, as they illustrate a number of techniques for testing, understanding and troubleshooting the Xplat agent. In fact, I first learned how to understand and modify the Red Hat MPs to monitor CentOS, and eventually even modified the RPM package to run on Ubuntu (which also works on Debian 5/Lenny) – as you can see, I am now using it to monitor, from home and across the Internet, the machine running this blog:
After all, those experiments with Xplat earned me a reputation as a “Unix expert at Microsoft” (an expression that still makes me laugh), as I was tweeting here:
The information in this weblog is provided “AS IS” with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided “AS IS” without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose. THIS WORK IS NOT ENDORSED AND NOT EVEN CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER, AND IT ONLY REPRESENT SOMETHING WHICH I’VE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE MADE RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential
But we do get this error:
Test-WSMan : The server certificate on the destination computer (virtubuntu.huis.dom:1270) has the following errors:
The SSL certificate could not be checked for revocation. The server used to check for revocation might be unreachable.
The SSL certificate is signed by an unknown certificate authority.
At line:1 char:11
+ test-wsman <<<< -computer virtubuntu.huis.dom -port 1270 -authentication basic -credential (get-credential) -usessl
    + CategoryInfo          : InvalidOperation: (:) [Test-WSMan], InvalidOperationException
    + FullyQualifiedErrorId : WsManError,Microsoft.WSMan.Management.TestWSManCommand
The credentials above have to be a Unix login, which we typed correctly, but we still can’t get through, as the certificate used by the agent is not trusted by our workstation. This seems to be the “usual” issue I first faced when testing SCX with WinRM in Beta 1. At the time I simply dismissed it with the following sentence:
[…] Of course you have to solve some other things such as DNS resolution AND trusting the self-issued certificates that the agent uses, first. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM. […]
and I sincerely thought that would explain it well enough… but eventually a lot of people got confused by this and did not know what to do, especially about the part on trusting the certificate. Anyway, in the following posts I figured out you could pass the -skipCACheck parameter to WinRM… which solved the issue of having to trust the certificate (which is fine for testing, but I would not use it for automation and scripts running in production… as it might expose your credentials to man-in-the-middle attacks).
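For reference, such a test with WinRM looks more or less like the following (a sketch: the hostname is the one used in this post, the password is a placeholder, and the exact switches might vary slightly between WinRM versions):

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -r:https://virtubuntu.huis.dom:1270/wsman -auth:basic -username:root -password:<password> -skipCACheck -skipCNCheck -encoding:utf-8

The -skipCACheck and -skipCNCheck switches are what let this work against the untrusted, self-issued agent certificate – again, fine for testing, not for production.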
So it seems that with the Powershell cmdlets we are back to that issue, as I can’t find a parameter to skip the CA check. Maybe it is there, but with PSv2 not having been released yet, I don’t know everything about it, and the CTP documentation is not yet complete. Therefore, back to trusting the certificate.
Trusting the certificate is actually very simple, but it can be a bit tricky when passing those certs back and forth from unix to windows. So let’s make the process a bit clearer.
All of the SCX agents’ certificates are ultimately signed by a key on the Management Server that has discovered them, but I don’t currently know where that certificate/key is stored on the Management Server. Anyway, you can get it from the agent certificate – as you only really need the public key, not the private signing key.
Use WinSCP or any other utility to copy the certificate off one of the agents. You can find that in the /etc/opt/microsoft/scx/ssl location:
that scx-host-computername.pem is your agent certificate.
Copy it to the Management server and change its extension from .pem to .cer. Now Windows will be happy to show it to you with the usual Certificate interface:
We need to go to the “Certification Path” tab, select the ISSUER certificate (the one called “SCX-Certificate”):
then go to the “Details” tab, and use the “Copy to File” button to export the certificate.
After you have the certificate in a .CER file, you can add it to the “trusted root certification authorities” store on the computer you are running your powershell tests from.
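If you prefer scripting that step rather than clicking through the Certificates MMC snap-in, something along these lines should work (a sketch: the path to the exported .CER file is just an example, and it needs to run elevated to write to the LocalMachine store):

# Import the exported SCX issuer certificate into the local machine's Trusted Root store
$cerPath = "C:\temp\SCX-Certificate.cer"   # example path to the exported certificate
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 $cerPath
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "Root", "LocalMachine"
$store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
$store.Add($cert)
$store.Close()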
So after you have trusted it, the same command as above actually works now:
cmdlet Get-Credential at command pipeline position 1
Supply values for the following parameters:
Credential
wsmid           : http://schemas.dmtf.org/wbem/wsman/identify/1/wsmanidentity.xsd
lang            :
ProtocolVersion : http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd
ProductVendor   : Microsoft System Center Cross Platform
ProductVersion  : 1.0.4-248
Ok, we can talk to it! Now we can do something more fun, like actually returning instances and/or calling methods:
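For example, enumerating instances of one of the SCX classes goes something like this (a sketch based on the class names shown in my earlier Xplat posts; I am assuming the root/scx namespace and the same host, port and Unix credentials used above, and the parameter names are those of the PowerShell v2 CTP WS-Man cmdlets):

# Enumerate SCX_OperatingSystem instances on the Linux box over WS-Man
$cred = Get-Credential   # a Unix login, as above
Get-WSManInstance -Enumerate `
    -ResourceURI "http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx" `
    -ComputerName virtubuntu.huis.dom -Port 1270 -UseSSL `
    -Authentication basic -Credential $cred
# Invoke-WSManAction can be used in a similar way to call CIM methods exposed by the agent
# (check the Management Packs for the exact method and parameter names).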
This is far from exhaustive, but should get you started on a world of possibilities about automating diagnostics and responses with Powershell v2 towards the OpsMgr 2007 R2 Cross-Platform machines. Enjoy!
Even if the backend for this feature is not yet documented, I was extremely curious to see how it had actually been implemented. Especially since it took a while for this feature to become available in OpsMgr, I suspected it could not be as simple as one downloadable XML file, like the one the old MOM 2005 MPNotifier had been using in the past.
Therefore I observed the console’s traffic through the lens of my proxy, and got my answer:
So that was it: a .Net Web Service.
I tried to ask the web service itself for discovery information, but failed:
Since there is no WSDL available, but I badly wanted to interact with it, I had to figure out which kinds of requests it would accept, how they should be written, which methods they could call and which parameters to pass in the call. In order to get started on this, I thought I could just observe its network traffic. And so I did… I fired up Network Monitor and captured the traffic:
Microsoft Network Monitor is beautiful and useful for this kind of stuff, as it lets you easily identify which application a given stream of traffic belongs to, just like in the picture above. After I had isolated just the traffic from the Operations Console, I saved those captured packets in CAP format and opened them again in Wireshark for a different kind of analysis – “Follow TCP Stream”:
This showed me the reassembled conversation, and what kind of request was actually done to the Web Service. That was the information I needed.
Ready to rock at this point, I came up with this Powershell script (to be run in OpsMgr Command Shell) that will:
1) connect to the web service and retrieve the complete MP list for R2 (this part is also useful on its own, as it shows how to interact with a SOAP web service in PowerShell, invoking a method of the web service by issuing a specially crafted POST request – see the sketch after this list. To give due credit, for this part I first looked at this Perl code, which I then adapted and ported to PowerShell);
2) loop through the results of the “Get-ManagementPack” opsmgr cmdlet and compare each MP found in the Management Group with those pulled from the catalog;
3) display a table of all imported MPs with both the version imported in your Management Group AND the version available on the catalog:
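The core of the approach looks more or less like this (a trimmed-down sketch, not the actual script: the catalog endpoint URL, the SOAP action and the element/property names in the response are illustrative placeholders, not the real values captured from the console’s traffic):

# POST a hand-crafted SOAP request to the (placeholder) MP catalog web service
$catalogUrl = "https://www.example.com/CatalogWebService/CatalogWebService.asmx"   # placeholder endpoint
$soapAction = "http://tempuri.org/GetMPList"                                       # placeholder SOAP action
$envelope = @"
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetMPList xmlns="http://tempuri.org/" />
  </soap:Body>
</soap:Envelope>
"@

$web = New-Object System.Net.WebClient
$web.Headers.Add("Content-Type", "text/xml; charset=utf-8")
$web.Headers.Add("SOAPAction", $soapAction)
[xml]$response = $web.UploadString($catalogUrl, $envelope)

# Index the catalog results by MP name (element and property names are illustrative)
$catalog = @{}
foreach ($mp in $response.SelectNodes("//*[local-name()='ManagementPack']")) {
    $catalog[$mp.Identity] = $mp.Version
}

# Compare with what is imported in the Management Group and display a table
Get-ManagementPack |
    Select-Object Name, Version, @{Name="CatalogVersion"; Expression={ $catalog[$_.Name] }} |
    Format-Table -AutoSize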
Remember that this is just SAMPLE code; it is not meant to be used in a production environment. It is also worth mentioning again that OpsMgr 2007 R2 is BETA software at the time of writing, therefore this functionality (and its implementation) might change at any time, and the script would break. Also, at present, the MP Catalog web service still returns slightly older MP versions and is not yet kept in sync with MP releases, but it should be ready and with complete/updated content by the time R2 gets released.
I have to say that the OpsMgr 2007 R2 beta release notes explain the known issues well, and I had no trouble whatsoever upgrading the Windows part. It just took its time (I am running virtual machines in my test lab, which don’t have the best performance), but it went smoothly and without a glitch. In a couple of hours I had everything upgraded: databases, RMS, reporting, agents, gateway. All right then. The new purple icons in System Center look cute, and the new UI has some great stuff, such as a long-awaited way to update your management packs directly from the Internet and a better display of overrides (kind of what we used to rely on Override Explorer for)… and A LOT more new stuff that I won’t be wasting my Sunday writing about, since everybody else already did that two days ago:
Therefore let’s get back to my upgrade, which is a lot more interesting (to me) than the marketing tam-tam 🙂
As part of the upgrade to R2, I first had to uninstall the Xplat beta refresh bits I had installed, including all the Unix Management Packs – even the CentOS Management Pack I had improvised.
So this is the new start page of the integrated Discovery Wizard:
It looks nice and integrates the discovery and deployment of Windows machines, SNMP devices, and Unix/Linux machines.
Of course, my CentOS machine would not be discovered and showed up as an unsupported platform, and the old Management Pack I had hacked together for XPlat Beta 1 did not work anymore. Therefore, I had to see what had changed and how to make it work again (it IS possible – it is NOT SUPPORTED, but I don’t care, as long as it works).
Since the existing agent could not be discovered, the first step I took was logging on to the Linux box, uninstalling the old agent, and installing the new one:
Then I tried to discover it again, but of course it still failed.
At that point I started taking a look at the new layout of things on the unix side. Most stuff is located in the same directories where beta1 was installed, and there are a bunch of useful commands under /opt/microsoft/scx/bin/tools. You can check out the Open Pegasus version used:
[root@centos tools]# ./scxcimconfig --version
Version 2.7.0
Let’s take a look at what SCX classes we have available:
./scxcimcli nc -n root/scx -di |grep SCX | sort
Nice. That’s the stuff we will be querying over WS-Man from the Management Server.
So let’s look at the OS discovery and test it from the OpsMgr 2007 box:
At first I assumed this worked like it did in Beta 1, so I exported the Red Hat management pack and made my own version of it, replacing the strings it expects to find so that it would discover CentOS instead of Red Hat.
While the MP was syntactically correct and would import fine, the Discovery wizard still didn’t work.
I took one more look at the discoveries in the MP, and found there are two more, targeted at the Management Server, which are probably what the Discovery Wizard uses to understand what kind of agent kit needs to be deployed.
So basically this discovery checks the value returned by the module to determine whether the discovered platform is a supported one:
But how does the module get its data?
Look at the layout of the /AgentManagement/UnixAgents folder on the Management Server:
That’s it: GetOSVersion.sh – a shell script. A nice, open, clear text, hackable shell script. Let’s take a look at it:
So that’s it, and that’s what my modification looks like. What happens during the Discovery Wizard is that the script is probably copied to the box over SCP and executed, a number of things are checked, and the discovery data we need is returned.
So after modifying the script… here we go. The Wizard now thinks CentOS is Red Hat, and can install an agent on it:
Only when the Management Server discovery finally considers the CentOS machine worth managing do the other discoveries, which use WS-Man queries, start kicking in like the old ones did, finding the OS objects and all the other hosted objects. In order for this to work you not only need to hack the shell script, but also to have a hacked MP – the “regular” Red Hat one won’t find CentOS, which is and remains an UNSUPPORTED platform.
I have been on holiday in the meantime… but the T-Shirt had arrived and was waiting for me in my letterbox in the office !! How cool is that???
So today I am walking around the Rome office in it… and I am looking at people’s faces: you need to understand that the Italian dress code is more or less the opposite of how people usually dress in Redmond… Italy is historically more formal, and it would be the norm to dress fancy… one would definitely look BAD here showing up in sandals at the office… and VERY bad wearing sandals to visit a customer… 🙂
After that, I took a different approach: I kept sending the mail on the SAME server as the backup, and using IMAP I could DRAG-and-DROP the backup attachment from the mailbox on one server to the mailbox on another server (=GMail). They did not mark me as a spammer that way, of course. So that worked for a while, but then I got tired of doing this manually.
So the following Ruby script is the way I automated the “move offsite” part of that backup. For completeness, I will give due credit about who set me on the right track: I started off from this example by Ryan.
#!/usr/bin/env ruby

begin_ = Time.now

# includes
require 'net/imap'

## Source Info
$SRCSERVER = "mail.muscetta.com"
$SRCPORT = 143
$SRCSSL = false
$SRCUSERNAME = "daniele"
$SRCPASSWORD = ""
$SRCFOLDER = "INBOX.Backups"

## Destination Info
$DSTSERVER = "imap.gmail.com"
$DSTPORT = 993
$DSTSSL = true
$DSTUSERNAME = "muscetta@gmail.com"
$DSTPASSWORD = ""
$DSTFOLDER = "Backup"

# connect to source
puts "connecting to source server #{$SRCSERVER}... \n\n"
srcimap = Net::IMAP.new($SRCSERVER, $SRCPORT, $SRCSSL)
srcimap.login($SRCUSERNAME, $SRCPASSWORD)
srcimap.select($SRCFOLDER)

# connect to destination
puts "connecting to destination server #{$DSTSERVER}... \n\n"
dstimap = Net::IMAP.new($DSTSERVER, $DSTPORT, $DSTSSL)
dstimap.login($DSTUSERNAME, $DSTPASSWORD)
dstimap.select($DSTFOLDER)

# Loop through all messages in the source folder.
uids = srcimap.uid_search(['ALL'])
if uids.length > 0
  $count = uids.length
  puts "found #{$count} messages to move... \n\n"

  srcimap.uid_fetch(uids, ['ENVELOPE']).each do |data|
    mid = data.attr['ENVELOPE'].message_id

    # Download the full message body from the source folder.
    puts "reading message... #{mid}"
    msg = srcimap.uid_fetch(data.attr['UID'], ['RFC822', 'FLAGS', 'INTERNALDATE']).first

    # Append the message to the destination folder, preserving flags and internal timestamp.
    puts "copying message #{mid} to destination..."
    dstimap.append($DSTFOLDER, msg.attr['RFC822'], msg.attr['FLAGS'], msg.attr['INTERNALDATE'])

    # delete the message
    puts "deleting message #{mid}..."
    srcimap.uid_store(data.attr['UID'], '+FLAGS', [:Deleted])
    srcimap.expunge
  end

  # disconnect
  dstimap.close
  srcimap.close
end

total_time = Time.now - begin_
puts "Done. RunTime: #{total_time} sec. \n\n"
I am testing the beta bits of the cross-platform extensions that were released on Microsoft Connect.
This post describes my limited testing so far – I hope it can benefit/help everyone testing the beta with some stuff that might currently not be incredibly clear – unless you attended the MMS class, at least :-))
I started out with the White Paper that has been posted on the web, which describes the architecture pretty well, but from a higher level (with diagrams and the like). Then I downloaded the beta bits, which contain another document about setting the thing up. It is pretty well done, to be honest (especially if you consider that it is beta documentation for a beta product!), but it does not really go very deep into troubleshooting yet. I will try to cover some of that here.
I installed the agent manually – it’s just an RPM package, not much can go wrong with that. There is a reason why I did not use the push discovery and deployment of the agent, which you will figure out reading later on. Once it was installed, I tried to figure out how things looked on the Linux machine. It is all pretty understandable, after all, if you look around on the machine (documented or not, Linux and open source stuff is easy to figure out by reading configuration files and the like, and by searching on the web).
Basically the “agent” is not properly an “agent” the way the Windows agent is, since it does not really “send” stuff to the Management Server on its own: it consists of a couple of services/daemons, based on existing open source projects, but configured in their own folder, with their own names, and using different ports than a standard install of those, so as not to conflict with possibly existing ones on those machines.
The Management Server uses these services remotely (similar to doing agentless monitoring towards a Windows box). The two services are a WS-Management daemon (based on openwsman) and a CIM server (based on OpenPegasus).
I still have to delve into them as deeply as I would like to, but I have already figured out a bunch of interesting things by quickly looking at them.
Agent communication: someone must have decided to “recycle” the 1270 port number that was used in MOM 2005 🙂 Basically openwsman runs as an SSL listener (with basic auth – connected via a PAM module to the “regular” Unix /etc/passwd users, so you can authenticate as those without having to define specific users for the service). So all that happens is that the Management Server executes WS-Man queries and commands over this channel: it connects to the agent on port 1270 using SSL every time, authenticates as “root” (or as the specified “Action Account”) and does its stuff, or asks the agent to do it. So the communication happens from the Management Server to the agent… not the other way around as it does with Windows “agents”. That’s why it feels to me more like an “agentless” setup, at least for what concerns the “direction” of the traffic and who does the actual querying.
For the rest, the provided Management Packs have “normal” discoveries and “normal” monitors. Pretty much like the Windows Management Packs often discover thing by querying WMI, here they use WS-Man to run CIM queries against the Unix boxes.
The Service Model is totally cool to actually *SEE* in action, don’t you think so ?
A few more debugging/troubleshooting notes:
I searched a bit and found the openwsman.org documentation and forum useful to figure some things out. For example, I banged my head a few times before managing to actually TEST a query from Windows to Linux using WinRM. This document helped a lot.
Of course you have to solve some other things such as DNS resolution AND trusting the self-issued certificates that the agent uses, first. Once you have done that, you can run test queries from the Windows box towards the Unix ones by using WinRM.
For example, this is how I tested what the discovery for a Linux RedHat Computer type should be returning (I read that by opening the MP in the Authoring Console, as one would usually do for any MP):
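The test is a WinRM enumeration of the SCX_OperatingSystem class, along these lines (a sketch: the hostname and password are placeholders, and the agent certificate has to be trusted by the Windows box for it to work):

winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -r:https://<linuxhost>:1270/wsman -auth:basic -username:root -password:<password> -encoding:utf-8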
If you need to test the query directly *ON* the Linux box (querying the CIMD instead of the WSMAND), the WBEMEXEC utility is packaged with the agent (under /opt/microsoft/scx/bin/tools). It is not as easy as some Windows administrators (who have used WBEMTEST or the WMI Tools in the past) would hope, but it is not that bad either. It is not really interactive, so to run a few queries against the CIM daemon locally you need to create an XML file that looks like the following (basically you build the RAW request the way the CIMD accepts it):
<?xml version="1.0" ?>
<CIM CIMVERSION="2.0" DTDVERSION="2.0">
  <MESSAGE ID="50000" PROTOCOLVERSION="1.0">
    <SIMPLEREQ>
      <IMETHODCALL NAME="EnumerateInstanceNames">
        <LOCALNAMESPACEPATH>
          <NAMESPACE NAME="root"/>
          <NAMESPACE NAME="scx"/>
        </LOCALNAMESPACEPATH>
        <IPARAMVALUE NAME="ClassName">
          <CLASSNAME NAME="SCX_OperatingSystem"/>
        </IPARAMVALUE>
      </IMETHODCALL>
    </SIMPLEREQ>
  </MESSAGE>
</CIM>
Once you have made such a file, you can execute the query in the file with the tool like the following:
As you can see from here, the CIMD uses HTTP already. This differs from Windows’ WMI, which uses RPC/DCOM. In a way, this is much simpler to troubleshoot, and more firewall-friendly.
I have not really found an activity or debug log for any of those components yet… but in the end they are not doing anything ON THEIR OWN, unless asked by the Management Server… so the “healthservice” logic is all on the Management Server anyway. Errors about failed discoveries, permissions of the Action Account user, and anything else will be logged by the HealthService on the Windows machine (the Management Server) that is actually performing the monitoring towards the Unix box.
It really is *just* about getting the WMI- and WinRM-equivalent layer on Linux/Unix up and running – after that, everything is done from Windows anyway!
Once this common management infrastructure has been provided, third parties will be able to write *just* MPs, without having to worry about the TRANSPORT of the information anymore.
As you have probably noticed from the screenshots and command lines, I don’t have a “real” Red Hat Enterprise Linux or other “supported” Linux distribution… Therefore I started my testing using CentOS 5 (which is very similar to RHEL 5) – the agent installed fine, as you can see, but nothing was really being “discovered”: the MP had only found a “Linux computer” but was not finding any “RedHat” or “SuSE” or any other “Operating System” instances… and if you are somewhat familiar with the way Operations Manager targeting works, you will understand that monitors are targeted at object classes. If no instances of those classes are discovered, NO MONITORING actually happens, even if the infrastructure is in place and the pieces are talking to each other:
Therefore my machine was not being monitored.
In the end, I actually got it to work, but I had to create a new Management Pack (exporting and modifying the RHEL5 one as a base) that would search for different property values and discover CentOS as if it were Red Hat:
After importing my hacked Management Pack the machine started to be monitored. Here you can see Health Explorer in all of its glory:
Of course this is a hack I made just to have a test setup somewhat working and to familiarize myself with the SCX components. It is not guaranteed that my Management Pack actually works on CentOS the way it is supposed to, or that there aren’t other – more subtle – differences between Red Hat and CentOS that will make it fail. I only modified a couple of discoveries to let it discover the “Operating System” instance… everything else should follow, but not necessarily. One difference you can already see in the screenshot above is that the hardware is not yet being monitored, so my hack is only partially working; it is definitely something that won’t be supported, so I cannot provide it here. Also, this is a beta, so I think the Management Packs will be re-released with subsequent beta versions, and this change would need to be re-done all over again. The unsupported distribution is also the reason why I installed the agent manually in the first place, as the “Discovery Wizard” would not really “agree” to let me install the agent remotely on an unsupported platform!
It is interesting to see how the number of open source projects written on and for the Microsoft platform keeps growing, and it is also nice to see that a lot of Microsoft employees are very active in and aware of the open source ecosystem, rather than being stuck with only what the company makes. Phil Haack, in a post about an interview with Brad Wilson, wisely writes:
"[…] What I particularly liked about this post was the insight Brad provides on the diverse views of open source outside and inside of Microsoft as well as his own personal experience contributing to many OSS projects. It’s hard for some to believe, but there are developers internal to Microsoft who like and contribute to various open source projects. […]"
"[…] Hey. My name is Ariel and I’m the person you thought would never work at MSFT […]".
In fact, just as I do, she is running that blog on WordPress, posting her photos on Flickr, using an RSS feed on FeedBurner and in general using a bunch of things that are out there that might be seen as "competing" with what Microsoft makes. In fact, this attitude towards other products and vendors on the market is what I am mainly interested in. Should we only use flagship products? Sure, when they help us, but not necessarily. Who cares? People’s blogs are not, as someone would like them to be, a coordinated marketing effort. This is about real people, real geeks, who just want to share and communicate personal ideas and thoughts. I had a blog before being at Microsoft, after all. Obviously I had exposure to competing products. My server was running LAMP on Novell Netware in 2002 – after which I moved it to Linux. It is not a big deal. And if I try to put things in perspective, this is in fact turning out to be an advantage. I am saying this because the latest news about interoperability comes from MMS (Microsoft Management Summit): the announcement that System Center Operations Manager will monitor Linux natively. I find this to be extremely exciting, and a step in the right direction… to say it all, I am LOVING this!!! But at the same time I see some other colleagues in technical support who are worried and scared by this – "if we do monitor Linux and Unix, we are supposed to have at least some knowledge of those systems", they are saying. Right. We probably do. At the moment there are probably only a limited number of people who can actually do that, at least in my division. But this is because in the past they must have sacrificed their own curiosity to become "experts" in some very narrow and "specialized" thing. Here we go. On the contrary, I kept using Linux – even when other "old school" employees would call me names. All of a sudden, someone else realizes my advantage. …but a lot of geeks already understood the power of exploration, and won’t stop defining people by easy labels. Another cool quote I read the other day is what Jimmy Schementi has written in his Flickr profile:
"[…] I try to do everything, and sometimes I get lucky and get good at something […]".
Reading his blog, it looks like he also gave up on trying to write a Twitter plugin for MSN Live Messenger (or maybe he never tried – but at least I wanted to do that) and wrote one for Pidgin instead. Why did he do that? I don’t know; I suppose because it was quicker/easier – and there were APIs and code samples to start from.
The bottom line, for me, is that geeks are interested in figuring out cool things (no matter what language or technology they use) and eventually communicating them. They tend to be pioneers of technologies. They try out new stuff. Open Source development is a lot about agility and "trying out" new things. Another passage of Brad’s interview says:
"[…] That’s true–the open source projects I contribute to tend to be the “by developer, for developer” kind, although I also consume things that are less about development […] Like one tool that I’ve used forever is the GIMP graphics editor, which I love a lot".
That holds true, when you consider that a lot of these things are not really mainstream. Tools made "by developer, for developer" are usually a sort of experimental ground. Like Twitter. Every geek is talking about Twitter these days, but you can’t really say that it is mainstream. Twitter has quite a bunch of interesting aspects, though, and that’s why geeks are on it. Twitter lets me keep up to date quicker and better (and with a personal, conversational touch) than RSS feeds and blogs do. Also, there are a lot of Microsofties on Twitter. And the cool thing is that you can really talk to everybody, at any level. Not everybody "gets" blogs, social networks, and microblogging. Of course you cannot expect everybody to be on top of the tech news, or to use experimental technologies. So in a way stuff like Twitter is "by geeks, for geeks" (not really just for developers – there’s a lot of "media" people on Twitter). Pretty much in the same way, a lot of people I work with (in direct contact, every day) only found out about LinkedIn during this year (2008!). I joined Orkut and LinkedIn in 2004. Orkut was in private beta back then. A lot of this stuff never becomes mainstream, some does. But it is cool to discover it when it gets born. How long did it take for social networking to become mainstream? So long that by the time it is mainstream for others, I have seen it for so long that I am even getting tired of it.
"[…] some of them we will be putting out on officelabs.com for the general public (you folks!) to try so we can understand how "normal" people would use these tools. Now of course, as we bloggers and blog-readers know, we’re not actually normal – you could even debate whether the blogosphere is more warped than the set of Microsoft employees, who comprise an interesting cross-section of job types, experiences, and cultures. But I digress. […]"
But I have been digressing, too, all along. As usual.
All in all, even if it is still under heavy development, what Miguel de Icaza has achieved (with Moonlight, just as with Mono) is amazing.
After I posted the above picture on Flickr, John Montgomery was amazed to see PopFly (his creature) working on Moonlight, and he linked to me from his blog.
I am seriously considering giving Asirra a try. It is an interesting project from Microsoft Research for an HIP (Human Interaction Proof) that uses info from petfinder.com to let users tell pictures of dogs from pictures of cats. There is also a WordPress plugin, in the best and newest “we want to interoperate” fashion that we are finally getting at Microsoft (which has always been the way to go, IMHO, and BTW).