Three quarters of 2015, my IT career and various ramblings

September is over. The first three quarters of 2015 are over.
This has been a very important year so far – difficult, but revealing. Everything has been about change, healing and renewal.

We moved back to Europe first and, more recently, you might also have read my other post about leaving Microsoft.

This was a hard choice – it took many months to reach the conclusion that this is what I needed to do.

Most people have gone through strong programming: they are taught that you have to be ‘successful’ at something. Success is externally defined (as opposed to satisfaction, which we define for ourselves), so you are supposed to study a certain field in college, then use it at work to build your career in that same field… and keep doing the same thing.

I was never like that – I didn’t go to college, I didn’t study as an ‘engineer’. I just saw there was a market opportunity to find a job when I started, studied on the job, eventually excelled at it. But it never was *the* road. It just was one road; it has served me well so far, but it was just one thing I tried, and it worked out.
How did it start? As a pre-teen I had been interested in computers, then left that for a while and did a ‘normal’ high school (in Italy at the time, this was really non-technological). Then I tried to study sociology for a little bit – I really enjoyed the Cultural Anthropology lessons, and we were smoking good weed with some folks outside the university – but I could not see myself spending the following 5 or 10 years of my life just studying and ‘hanging around’. I wanted money and independence to move out of my parents’ house.

So, without much fanfare, I revived my IT knowledge: upgraded my skill from the ‘hobbyist’ world of the Commodore 64 and Amiga scene (I had been passionate about modems and the BBS world then), looked at the PC world of the time, rode the ‘Internet wave’ and applied for a simple job at an IT company.

A lot of my friends were either not even searching for a job, with the excuse that there weren’t any, or spending time at university – in a time of change, when all the university-level jobs were taken anyway, so that would only have meant waiting even longer after they finished studying… I am not sure they realized this until much later.
But I just applied, played my cards, and got my job.

When I went to sign the contract, they also reminded me they expected hard work at the simplest and humblest level: I would have to fix PCs and printers, and help users with networking issues and tasks like those – at a customer of theirs, a big company.
I was ready to roll up my sleeves and help that IT department however I would be capable of, and I did.
It all grew from there.

And that’s how my IT career started. I learned everything I know of IT on the job, by working my ass off, studying extra hours, watching older and more expert colleagues, and gaining experience.

I am not an engineer.
I am, at most, a mechanic.
I did learn a lot about companies and markets, languages, designs, politics – the human and technical factors in software engineering and the IT marketplace – over the course of the past 18 years.

But when I started, I was just trying to lend an honest hand and get paid some money in return – isn’t that what work is about?

Over time, IT got out of control. Like Venom in the Marvel comics, which made its first appearance as a costume Spider-Man started wearing… and slowly took over, since the ‘costume’ was in reality some sort of alien symbiotic organism (like a pest).

You might be wondering what I mean. From the outside I was a successful Senior Program Manager of a ‘hot’ Microsoft product.
Someone must have mistaken my diligence and hard work for ‘talent’ or ‘career ambition’ – but it never was that.
I got pushed up, taught never to turn down ‘opportunities’.

But I don’t feel this is my path anymore.
That type of work takes too much mental energy out of me, and it made me neglect myself and my family. Success at the expense of my own health and my family’s isn’t worth it. Other people have written about that too – in my case, hopefully, I stopped earlier.

So what am I doing now?

First and foremost, I am taking time for myself and my family.
I am reading (and writing).
I am cooking again.
I have been catching up on sleep – and have dreams again.
I am helping my father-in-law build a shed in his yard.
We bought a 14-year-old Volkswagen van that we are turning into a camper.
I have not stopped building guitars – in fact, I am getting set up to do it ‘seriously’, so I am also standing up a separate site to promote that activity.
I am making music and discovering new music and instruments.
I am meeting new people and new situations.

There are a lot of folks out there who either think I am crazy (they might be right, but I am happy this way) or think this is some sort of lateral move. I am not searching for another IT job, thanks. Stop the noise on LinkedIn, please: I don’t fit in your algorithms – I just made you believe I did, all these years.

Repost: Useful SetSPN tips

I just saw that my former colleague (PFE) Tristan has posted an interesting note about the use of SetSPN "-A" vs SetSPN "-S". I normally don’t repost other people’s content, but I thought this would be useful, as there are a few SPNs used in OpsMgr and it is not always easy to get them all right… and you can find a few tricks I was not aware of by reading his post.

Check out the original post at http://blogs.technet.com/b/tristank/archive/2011/10/10/psa-you-really-need-to-update-your-kerberos-setup-documentation.aspx

Does anyone have a new System Center sticker for me?


I got this sticker last APRIL at MMS2010 in JUST ONE COPY, and I waited until I got a NEW laptop in SEPTEMBER to actually use it…
It also took a while to stick it on properly (not to mention re-installing the PC the way I wanted), but this week they told me that, due to an error, I was given the wrong machine (they did it all themselves, though – I did not ask for any specific one) and this one needs to be replaced!

This is WORSE than any hardware FAILure, as the machine just works very well and I was expecting to keep it for the next two years 🙁

Can anyone be so nice to send me one of those awesome stickers again? 🙂

Programmatically Check for Management Pack updates in OpsMgr 2007 R2

One of the cool new features of System Center Operations Manager 2007 R2 is the possibility to check and update Management Packs from the catalog on the Internet directly from the Operators Console:

Select Management Packs from Catalog

Even though the backend for this feature is not yet documented, I was extremely curious to see how it had actually been implemented. Especially since it took a while for this feature to become available in OpsMgr, I suspected it could not be as simple as one downloadable XML file, like the one the old MOM 2005 MPNotifier had been using in the past.

Therefore I observed the console’s traffic through the lens of my proxy, and got my answer:

ISA Server Log

So that was it: a .Net Web Service.

I tried to ask the web service itself for discovery information, but failed:

WSDL

Since there is no WSDL available, but I badly wanted to interact with it, I had to figure out what kind of requests it would accept, how they should be written, what methods could be called and what parameters to pass. To get started on this, I thought I could just observe its network traffic. And so I did: I fired up Network Monitor and captured the traffic:

Microsoft Network Monitor 3.2

Microsoft Network Monitor is beautiful and useful for this kind of stuff, as it lets you easily identify which application a given stream of traffic belongs to, just like in the picture above. After I had isolated just the traffic from the Operations Console, I saved those captured packets in CAP format and opened them again in Wireshark for a different kind of analysis – "Follow TCP Stream":

Wireshark: Follow TCP Stream

This showed me the reassembled conversation, and what kind of request was actually done to the Web Service. That was the information I needed.

Ready to rock at this point, I came up with this PowerShell script (to be run in the OpsMgr Command Shell) that will:

1) connect to the web service and retrieve the complete MP list for R2 (this part is also useful on its own, as it shows how to interact with a SOAP web service from PowerShell, invoking a method of the web service by issuing a specially crafted POST request. To give due credit, for this part I first looked at this Perl code, which I then adapted and ported to PowerShell);

2) loop through the results of the "Get-ManagementPack" OpsMgr cmdlet and compare each MP found in the Management Group with those pulled from the catalog;

3) display a table of all imported MPs with both the version imported in your Management Group AND the version available on the catalog:

Script output in OpsMgr Command Shell

Remember that this is just SAMPLE code: it is not meant to be used in a production environment, and it is worth mentioning again that OpsMgr 2007 R2 is BETA software at the time of writing, so this functionality (and its implementation) might change at any time, and the script will break. Also, at present, the MP Catalog web service still returns slightly older MP versions and is not yet kept in sync with MP releases, but it should be ready and with complete, updated content by the time R2 gets released.
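For those curious about the technique itself, the hand-crafted SOAP call described in point 1 can be sketched as follows. This is a Python illustration of the same idea (the actual script is PowerShell), and the endpoint URL, namespace and method name below are placeholders – NOT the real, undocumented catalog service:

```python
# Sketch: invoking a SOAP 1.1 web service by hand-crafting the POST request.
# URL, namespace and method name are hypothetical placeholders.
import urllib.request

CATALOG_URL = "http://example.com/CatalogService.asmx"  # placeholder
SOAP_NAMESPACE = "http://example.com/"                  # placeholder


def build_soap_envelope(method, namespace):
    """Builds a minimal SOAP 1.1 envelope for a parameterless method call."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        f'<{method} xmlns="{namespace}"/>'
        "</soap:Body>"
        "</soap:Envelope>"
    )


def call_soap_service(method="GetMPList"):
    """POSTs the envelope with the Content-Type and SOAPAction headers
    that a .NET .asmx web service expects, and returns the raw response."""
    body = build_soap_envelope(method, SOAP_NAMESPACE).encode("utf-8")
    req = urllib.request.Request(
        CATALOG_URL,
        data=body,
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction": SOAP_NAMESPACE + method,
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The envelope and headers are what you would reconstruct from a "Follow TCP Stream" capture like the one above; only the method name and namespace change from service to service.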

Disclaimer

The information in this weblog is provided “AS IS” with no warranties, and confers no rights. This weblog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided “AS IS” without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.
THIS WORK IS NOT ENDORSED, CHECKED, AUTHORIZED, SCRUTINIZED NOR APPROVED BY MY EMPLOYER; IT ONLY REPRESENTS SOMETHING I HAVE DONE IN MY FREE TIME. NO GUARANTEE WHATSOEVER IS GIVEN ON THIS. THE AUTHOR SHALL NOT BE HELD RESPONSIBLE FOR ANY DAMAGE YOU MIGHT INCUR WHEN USING THIS INFORMATION. The solution presented here IS NOT SUPPORTED by Microsoft.

A Rant about Openness

It is interesting to see that the number of open source projects written on and for the Microsoft platform grows and grows, and also nice to see that a lot of Microsoft employees are very active in and aware of the open source ecosystem, rather than sticking only to what the company makes. Phil Haack, in a post about an interview with Brad Wilson, wisely writes:

"[…] What I particularly liked about this post was the insight Brad provides on the diverse views of open source outside and inside of Microsoft as well as his own personal experience contributing to many OSS projects. It’s hard for some to believe, but there are developers internal to Microsoft who like and contribute to various open source projects. […]"

In fact, whether made by Microsoft people or not, the list of open source software on CodePlex keeps growing too. Speaking of CodePlex and interviews, another interesting one is that of Sara Ford, Program Manager for CodePlex, posted on Microspotting. But Microspotting is awesome in general. My favorite quote of hers:

"[…] Hey. My name is Ariel and I’m the person you thought would never work at MSFT […]".

In fact, just as I do, she runs her blog on WordPress, posts her photos on Flickr, uses an RSS feed on Feedburner, and in general uses a bunch of things out there that might be seen as "competing" with what Microsoft makes. This attitude towards other products and vendors on the market is what I am mainly interested in. Should we only use flagship products? Sure, when they help us, but not necessarily. Who cares? People’s blogs are not, as someone would like them to be, a coordinated marketing effort. This is about real people, real geeks, who just want to share and communicate personal ideas and thoughts.

I had a blog before joining Microsoft, after all, and obviously I had exposure to competing products. My server was running LAMP on Novell Netware in 2002 – after which I moved it to Linux. It is not a big deal. And if I try to put things in perspective, this is actually turning out to be an advantage. I say this because the latest news about interoperability comes from MMS (Microsoft Management Summit): the announcement that System Center Operations Manager will monitor Linux natively. I find this extremely exciting, and a step in the right direction – frankly, I am LOVING it!

At the same time, I see some colleagues in technical support who are worried and scared by this: "if we are to monitor Linux and Unix, we are supposed to have at least some knowledge of those systems", they say. Right. We probably should. At the moment there are probably only a limited number of people who can actually do that, at least in my division. But that is because in the past they sacrificed their own curiosity to become "experts" in some very narrow and "specialized" thing. There we go. I, on the contrary, kept using Linux – even when other "old school" employees would call me names. All of a sudden, someone else realizes my advantage.
…but a lot of geeks already understood the power of exploration, and won’t stop defining people by easy labels. Another cool quote I read the other day is what Jimmy Schementi has written in his Flickr profile:

"[…] I try to do everything, and sometimes I get lucky and get good at something […]".

Reading his blog, it looks like he also gave up on trying to write a Twitter plugin for MSN Live Messenger (or maybe he never tried – but at least I wanted to do that) and wrote it for Pidgin instead. Why did he do that? I don’t know; I suppose because it was quicker and easier – and there were APIs and code samples to start from.

The bottom line, for me, is that geeks are interested in figuring out cool things (no matter what language or technology they use) and eventually communicating them. They tend to be pioneers of technologies. They try out new stuff. Open Source development is a lot about agility and "trying out" new things. Another passage of Brad’s interview says:

"[…] That’s true–the open source projects I contribute to tend to be the “by developer, for developer” kind, although I also consume things that are less about development […] Like one tool that I’ve used forever is the GIMP graphics editor, which I love a lot".

That holds true when you consider that a lot of these things are not really mainstream. Tools made "by developer, for developer" are usually a sort of experimental ground. Like Twitter. Every geek is talking about Twitter these days, but you can’t really say that it is mainstream. Twitter has quite a bunch of interesting aspects, though, and that’s why geeks are on it. Twitter lets me keep up to date quicker and better (and with a personal, conversational touch) than RSS feeds and blogs do. Also, there are a lot of Microsofties on Twitter, and the cool thing is that you can really talk to everybody, at any level.

Not everybody "gets" blogs, social networks and microblogging, of course; you cannot expect everybody to be on top of the tech news, or to use experimental technologies. So in a way, stuff like Twitter is "by geeks, for geeks" (not really just for developers – there are a lot of "media" people on Twitter). Pretty much in the same way, a lot of people I work with (in direct contact, every day) only found out about LinkedIn this year (2008!). I joined Orkut and LinkedIn in 2004 – Orkut was in private beta back then. A lot of this stuff never becomes mainstream; some does. But it is cool to discover it when it gets born. How long did it take for social networking to become mainstream? So long that by the time it is mainstream for others, I have seen it for so long that I am even getting tired of it.

For some reason, geeks love to be pioneers. This is well expressed in a digression by Chris Pratley:

"[…] some of them we will be putting out on officelabs.com for the general public (you folks!) to try so we can understand how "normal" people would use these tools. Now of course, as we bloggers and blog-readers know, we’re not actually normal – you could even debate whether the blogosphere is more warped than the set of Microsoft employees, who comprise an interesting cross-section of job types, experiences, and cultures. But I digress. […]"

But I have been digressing, too, all along. As usual.

Why do developers tend to forget about people behind proxy servers?

I know this is a very common issue.

I keep finding way too much software that claims to interact with Web 2.0 sites or services, connecting here or there… while still forgetting one simple basic rule: let people use a proxy.

Most programmers, for some reason, just assume that since they are directly connected to the Internet, everybody is. That isn’t always the case: most companies have proxies and will only let you out on port 80 – through their proxy.

…which in turn is one of the reasons why most applications now "talk" and tunnel whatever application protocol on top of HTTP. Still, a lot of software simply "forgets" or doesn’t bother providing a simple "use proxy" checkbox, which would translate into two or three extra lines of code – lines I personally include in my projects, and I am not even a *developer*! (That might explain why I *think* of it: I come from a security and networking background. :-))

I thought of writing this post after having read this post by Saqib Ullah.

Anyway, I keep finding this issue over and over again, both in simple, hobbyist sample code and in complex, big, expensive enterprise software. The last time I got pissed off about a piece of code missing this feature was a few days ago, when testing http://www.codeplex.com/FacebookToolkit. The previous time was during Windows Vista beta testing (I had found a similar issue in Beta 2, and had it fixed for RC1).

Actually, I am being polite calling it a "missing feature". To be honest, I think missing this "feature" should be considered a bug: every piece of software using HTTP *should* include the possibility to pass through a proxy (and don’t forget about AUTHENTICATED proxies), or the purpose of using HTTP in the first place is defeated!

Developers! You have to remember: people ARE behind proxies!
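To show just how little those "two or three extra lines" cost, here is a minimal sketch in Python (a hypothetical helper, not taken from any of the programs mentioned; the proxy address is a placeholder) of an HTTP client that honors an optional, possibly authenticated, proxy:

```python
# Sketch of the "use proxy" checkbox: an opener that optionally routes
# traffic through a (possibly authenticated) proxy. Addresses are placeholders.
import urllib.request


def make_opener(proxy_url=None):
    """Returns a URL opener that goes through proxy_url when one is given.

    For an AUTHENTICATED proxy, credentials can be embedded in the URL,
    e.g. "http://user:password@proxy.example:8080".
    """
    if proxy_url:
        # Route both plain and TLS traffic through the proxy.
        handler = urllib.request.ProxyHandler({"http": proxy_url, "https": proxy_url})
    else:
        # An empty mapping means a direct connection (environment proxies ignored).
        handler = urllib.request.ProxyHandler({})
    return urllib.request.build_opener(handler)


# The checkbox in a UI boils down to which branch you take:
direct = make_opener()
proxied = make_opener("http://proxy.example:8080")  # placeholder proxy address
```

That is the whole feature: one optional parameter and one handler.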

MOM 2005 Alerts to RSS feed

I am an RSS addict, you know that. So I wanted an RSS feed showing MOM alerts. I had been thinking about it for a while, last year (or was it the year before?).
It seemed like a logical thing to me: alerts are created (and can be resolved – that is, expire), generally get sorted by the date and time they were created, and look pretty much like a list. Also, many people like to receive mail notifications when new alerts are generated.
So, if an alert can be sent to you (push), you could also go and get it (pull).
It is pretty much the same deal as receiving a mail, reading a newsgroup, or syndicating a feed.

At the time I looked around but it seemed like no one had something like this already done.
So I wrote a very simple RSS feed generator for MOM alerts.
I did it quite some time ago, just as an exercise.
Then, after a while, I found out that the MOM 2005 Resource Kit had been updated to include such a utility!

Wow, I thought, they finally added what I had been thinking about for a while. Might it be because I mentioned it on a private mailing list? Maybe. Maybe not. Who cares. Of course, if it is included in the Resource Kit it must be way cooler than the one I made, I thought.
I really thought something along these lines, but never actually had the time to try it out.
I think I just sort of assumed it must have been cooler than the one I made, since it was part of an official package, while I am not a developer. So I basically forgot about the one I wrote, dismissing it as being crap without looking too much into it anymore.
Until today.
Today I actually tried to use the alert to RSS tool included in the resource kit, because a customer asked if there was any other way to get notified, other than receiving notification or using the console (or the console notifier).
So I looked at the resource kit’s Alert-to-RSS Utility.
My experience with it:
1) it is provided in source code form – which would be fine if it were ALSO provided compiled. Instead it is ONLY provided as source, and most admins don’t have Visual Studio installed or don’t know how to compile from the command line;
2) even if you do want to compile it, it includes a bug which makes it impossible to compile – the solution is in this newsgroup discussion;
3) if you don’t want to mess about with code – since this is a Resource Kit tool (as opposed to something in the SDK) – you can even get it already compiled by someone from somewhere on the net, but that choice is all about trust.

Anyway, one way or another, after it is finally set up… surprise, surprise!
It does NOT show a LIST of alerts (as I was expecting).
It shows a summary of how many alerts you have. Basically, it is an RSS feed made of a single item, and this single item tells you how many alerts you have. What is one supposed to do with such a SUMMARY? IMHO, it is useless the way it is – even worse than one of those feeds that only contain the excerpt of an article rather than the full text.
Knowing that I have 7 critical errors and 5 warnings without actually knowing ANYTHING about them is pointless.
It might be useful for a manager, but not for a sysadmin, at least.

So I thought my version, even if coded crappily, might be useful to someone, because it gives you a list of alerts (those that are not resolved), and each one of them carries the description of the alert and the machine that generated it, and includes a link to the actual alert in the web console – so you can click, go there, and start troubleshooting from within your aggregator!
My code does this. Anyway, since I am a crap coder, since I wrote it in only fifteen minutes more than a year ago, and since I don’t have time to fix it and make it nicer, it has several issues and could be improved in a million ways, in particular the following:

  1. it currently depends on the SDK database views – it could use the MOM Server APIs or the web service instead;
  2. it uses SQL security to connect to the DB – by default MOM does not allow this: it is suggested that the SQL instance hosting “OnePoint” only use Windows Integrated Authentication. So, to make my code work, you have to switch back to Mixed Mode and create a SQL login that has permission to read the database. This is because I coded this in a hurry and I don’t know how to use delegation – if I were able to use delegation, I would, so that the end user accessing IIS would be the one connecting to the DB. If anybody wants to teach me how to do this, I will be most grateful;
  3. it could accept parameters as URL variables, so as to filter only alerts for a specific machine, or a specific resolution state, etc.;
  4. at present it uses RSS.NET to generate the feed. It could be made independent of it, but I don’t really see why – I quite like that library.

The code is just an ASP.NET page and its code-behind – no need to compile – but of course you need to change a couple of lines to match your web console address.
Also, you need to get RSS.NET and copy its library (RSS.Net.dll) into the /bin subfolder of the website directory where you place the RSS feed generator page. I see that I wrote this with version 0.86, but any version should do, really.

Here is what it will look like:

AlertToRSS

And here’s the code of the page (two files):

Default.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

Default.aspx.cs

using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Web;
using Rss;

public partial class _Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        string webconsoleaddress = "http://192.168.0.222:1272/AlertDetail.aspx?v=a&sid="; // must change to match your address

        // Initialize the feed
        RssChannel rssChannel = new RssChannel();
        rssChannel.Title = "MOM Alerts";
        rssChannel.PubDate = DateTime.Now;
        rssChannel.Link = new Uri("http://192.168.0.222:1272/rss/"); // must change to match your address
        rssChannel.LastBuildDate = DateTime.Now;
        rssChannel.Description = "Contains the latest Alerts";

        // Query – you might want to change the severity
        string mySelectQuery = "SELECT ComputerName, Name, Severity, TimeRaised, RepeatCount, GUID FROM dbo.SDKAlertView WHERE Severity > 10 AND ResolutionState < 255";

        // SQL connection – must change SQL server, user name and password
        SqlConnection conn = new SqlConnection("Data Source=192.168.0.222;Initial Catalog=OnePoint;User ID=rss;Password=rss");
        SqlDataReader rdr = null;

        try
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(mySelectQuery, conn);
            rdr = cmd.ExecuteReader();
            while (rdr.Read())
            {
                // One RSS item per unresolved alert, linking back to the web console
                RssItem rssItem = new RssItem();
                rssItem.Title = rdr[1].ToString();
                string url = webconsoleaddress + rdr[5];
                rssItem.Link = new Uri(url);
                string description = "<![CDATA[ <p><a href=\"" + rssItem.Link + "\">" + rdr[1] + " </a></p><br>" + "<br>Computer: " + rdr[0] + "<br>Repeat Count: " + rdr[4] + "<br>Original Alert Time: " + rdr[3] + " ]]>";
                rssItem.Description = description;
                rssChannel.Items.Add(rssItem);
            }

            // Finalize the feed and write it to the response
            RssFeed rssFeed = new RssFeed();
            rssFeed.Channels.Add(rssChannel);
            Response.ContentType = "text/xml";
            Response.ExpiresAbsolute = DateTime.MinValue;
            rssFeed.Write(Response.OutputStream);
        }
        finally
        {
            if (rdr != null)
            {
                rdr.Close();
            }
            if (conn != null)
            {
                conn.Close();
            }
        }
    }
}

Out-Blog!

[Edited again 25th November – Jachym gave me some suggestions and insights on the use of parameters, and I slightly changed/fixed the original code I had posted yesterday. There are still more things that could be improved, of course, but I’ll leave them for the future, next time I have time for it (who knows when that will be?)]

This is a post about my first test writing a cmdlet for PowerShell. A few days after changing my blog’s title to “$daniele.rant | Out-Blog” (where Out-Blog was a fantasy cmdlet name, and the title was just meant to mimic PowerShell syntax in a funny way), I stumbled across this wonderful blog post: http://blog.boschin.it/archive/2006/09/21/4375.aspx that describes how to use the assemblies of “Windows Live Writer”. Then I saw the light: I could actually implement an “Out-Blog” cmdlet. I am not sure what this could be useful for… but I thought it would be fun to experiment with. I followed the HOW TO information on this other blog post to guide me through the coding: http://www.proudlyserving.com/archives/2005/10/lets_all_write_1.html

The result is the code that follows. As you can see, it is pretty much Boschin’s code wrapped into a cmdlet class. Nothing fancy. Just a test. I thought someone might find it interesting. It is provided “AS IS”, mainly for educational purposes (MINE, only mine… I’m the one whose education is being improved, not you :-))

Out-Blog! 1 

 

using System;
using System.Collections.Generic;
using System.Text;
using System.Management.Automation;
using WindowsLive.Writer.BlogClient.Clients;
using WindowsLive.Writer.BlogClient;
using WindowsLive.Writer.CoreServices;
using WindowsLive.Writer.CoreServices.Settings;
using WindowsLive.Writer.Extensibility.BlogClient;
using Microsoft.Win32;

namespace LiveWriterCmdlet
{
    [Cmdlet("Out", "Blog", SupportsShouldProcess = true)]
    public sealed class OutBlogCmdlet : Cmdlet
    {
        [Parameter(Position = 0, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Title
        {
            get { return _title; }
            set { _title = value; }
        }
        private string _title;

        [Parameter(Position = 1, Mandatory = true, ValueFromPipeline = true, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Text
        {
            get { return _text; }
            set { _text = value; }
        }
        private string _text;

        [Parameter(Position = 2, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string BlogApiEndPoint
        {
            get { return _blogapiendpoint; }
            set { _blogapiendpoint = value; }
        }
        private string _blogapiendpoint;

        [Parameter(Position = 3, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string UserName
        {
            get { return _username; }
            set { _username = value; }
        }
        private string _username;

        [Parameter(Position = 4, Mandatory = true, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string Password
        {
            get { return _password; }
            set { _password = value; }
        }
        private string _password;

        [Parameter(Position = 6, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public string ProxyAddress
        {
            get { return _proxyaddress; }
            set { _proxyaddress = value; }
        }
        private string _proxyaddress;

        [Parameter(Position = 7, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        [ValidateNotNullOrEmpty]
        public int ProxyPort
        {
            get { return _proxyport; }
            set { _proxyport = value; }
        }
        private int _proxyport;

        [Parameter(Position = 8, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyUserName
        {
            get { return _proxyusername; }
            set { _proxyusername = value; }
        }
        private string _proxyusername;

        [Parameter(Position = 9, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public string ProxyPassword
        {
            get { return _proxypassword; }
            set { _proxypassword = value; }
        }
        private string _proxypassword;

        [Parameter(Position = 10, Mandatory = false, ValueFromPipeline = false, ValueFromPipelineByPropertyName = true)]
        public SwitchParameter Published
        {
            get { return _published; }
            set { _published = value; }
        }
        private bool _published;

        protected override void BeginProcessing()
        {
            base.BeginProcessing();
            ApplicationEnvironment.Initialize();

            // Only enable the proxy when an address was actually given
            // (the original "(ProxyAddress != null) | (ProxyAddress != "")" check was always true).
            if (!string.IsNullOrEmpty(ProxyAddress))
            {
                WebProxySettings.ProxyEnabled = true;
                WebProxySettings.Hostname = ProxyAddress;
                WebProxySettings.Port = ProxyPort;
                WebProxySettings.Username = ProxyUserName;
                WebProxySettings.Password = ProxyPassword;
            }
            else
            {
                WebProxySettings.ProxyEnabled = false;
            }
        }

        protected override void ProcessRecord()
        {
            if (ShouldProcess(Text))
            {
                ISettingsPersister persister = new RegistrySettingsPersister(Registry.CurrentUser, @"Software\Windows Live Writer");
                IBlogCredentials credentials = new BlogCredentials(new SettingsPersisterHelper(persister));
                IBlogCredentialsAccessor credentialsAccessor = new BlogCredentialsAccessor("dummy-value", credentials);

                credentials.Username = UserName;
                credentials.Password = Password;

                MovableTypeClient client = new MovableTypeClient(new Uri(BlogApiEndPoint), credentialsAccessor, PostFormatOptions.Unknown);

                BlogPost MyPost = new BlogPost();
                MyPost.Title = Title;
                MyPost.Contents = Text;
                client.NewPost("dummy-value", MyPost, Published);

                WriteVerbose("Posted Successfully.");
            }
        }
    }
}

Time Capsule

Yahoo has done it again. Yet another cool photographic site: Time Capsule.
They show every day that they really GET the community thing. Thumbs up for them.

On a side note, I honestly don’t know why you need to UPLOAD photos there and can’t just LINK or REFERENCE photos you have already uploaded to Flickr (isn’t that a site they bought?). I don’t see any effort towards INTEGRATION here, and I don’t get why. If I had a photo platform like Flickr, I would use its web API to let registered users just “PASS OVER” some pictures from one site to the other.

But, hey… regardless… it looks really COOL. It really does. Cooler than actually USEFUL (it reminds me of Intrusion Detection Systems… but I digress), but that’s how a lot of these community things are: not useful for your business, but good for your heart.
