tag:blogger.com,1999:blog-44234009439695185212024-03-05T00:43:18.183-08:00AbstractLabsBlog focused on software development, configuration management, Amazon, Microsoft and related technologiesAnonymoushttp://www.blogger.com/profile/02923461543579951804noreply@blogger.comBlogger40125tag:blogger.com,1999:blog-4423400943969518521.post-52168419226092645262023-06-15T09:34:00.000-07:002023-06-15T09:34:39.161-07:00Provisioning Cross-Account Dependencies in Amazon Account Factory for Terraform<p>Before we dive into setting up an additional provider, it might help to go over a little of the project structure for Amazon Account Factory for Terraform (AFT). When you create a new customization by copying the template directory in your project, you will notice two Jinja files in your terraform folder. These files act as templates that work with the AFT pipeline to properly configure your Terraform script for execution. By default they come pre-configured with a single provider; this provider becomes the provider for your target account when the pipeline executes. Jinja uses <tt>{{ variable }}</tt> placeholders to replace the values in the file with those ambient in the execution pipeline. So what if you need to configure a cross-account dependency between your account and another account?</p>
<p>Configuring a second provider is pretty straightforward: copy the provider block, paste it into the template file, and replace the <em>role_arn</em> with a role in the secondary account. If you are using AFT in other accounts, you can assume the AWSAFTExecution role in the secondary account to gain administrative access for the Terraform provider, then set the alias in the second block to something you can reference in your script. That's it; now you can apply changes to multiple accounts via the customization script.</p>
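<p>Once the alias is in place, a resource in the customization script can be routed to the secondary account through the <em>provider</em> meta-argument. A minimal sketch, assuming a hypothetical alias of <em>secondary</em> and a hypothetical bucket name:</p>
<pre>
resource "aws_s3_bucket" "cross_account" {
  # Routes this resource to the aliased provider for the secondary account
  provider = aws.secondary
  bucket   = "my-cross-account-bucket"
}
</pre>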
<pre>
provider "aws" {
  region = "us-east-1"
  alias  = "*YOURALIASHERE*"
  assume_role {
    role_arn = "arn:aws:iam::*YOURACCOUNTHERE*:role/AWSAFTExecution"
  }
  default_tags {
    tags = {
      managed_by = "AFT"
    }
  }
}
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-84908707019914886262023-04-19T15:31:00.008-07:002023-04-19T15:36:21.377-07:00Using Terraform Remote Modules Sourced From CodeCommit using HTTPS (GRC) with AWS Account Factory (AFT) Workflow<p>
AWS Account Factory is a robust templating framework that lets users apply account configuration uniformly, using Terraform, across an organization with multiple accounts and consolidated billing. Currently, there is a limitation preventing the inclusion of CodeCommit modules in the customization Terraform script, which makes it difficult to compartmentalize the account customization scripts. Amazon recommends using HTTPS (GRC) to access CodeCommit; this method allows you to use AWS credentials to access the repository, negating the need for usernames, passwords, and keys. Fortunately, all of the nuts and bolts to source modules from CodeCommit are already there. We just need to make a few tweaks to the customizations folder and the role used to execute the customizations CodeBuild project.</p>
<h2>Terraform pre-api-helpers.sh</h2>
<p>To support CodeCommit GRC access, you need to install the git-remote-codecommit credential helper. This can be done using the pre-api-helpers.sh script, negating the need to modify the boilerplate CodeBuild project provided by Amazon.
</p><pre>#AWS overwrites the Python venv when executing the pre-api-helpers, so you first need
#to re-map it back to the one used when Terraform executes
python3 -m venv $DEFAULT_PATH/aft-venv
#Install the CodeCommit helper into the Terraform venv's pip location
$DEFAULT_PATH/aft-venv/bin/pip install git-remote-codecommit
</pre>
<h2>Terraform module reference</h2>
<p>When setting up your repositories in CodeCommit for access as a module reference, it is recommended to use a naming convention so that you can specify that convention in the policy, avoiding the need to add each repository to the policy individually or to grant access to all repositories. Note the use of aft-management in the URI; this tells the credential helper to use the credential profile for the management account, or the AFT management role, when connecting to the repository. </p>
<pre>module "aft-module-mymodule" {
  source = "git::codecommit::us-east-1://aft-management@aft-module-mymodule"
}
</pre>
<h2>IAM Policy Changes</h2>
<p>Next, we need to give AFT access to the module repositories so they can be referenced as noted above. It is recommended to use a naming convention so any new modules you create are automatically granted access in the IAM policy. First, locate the aft-customizations-role in IAM and add the following policy, or amend the existing policy that allows access to the aft-account-customizations repository:</p>
<pre>...
{
    "Effect": "Allow",
    "Action": [
        "codecommit:GetBranch",
        "codecommit:GetRepository",
        "codecommit:GetCommit",
        "codecommit:GitPull",
        "codecommit:UploadArchive",
        "codecommit:GetUploadArchiveStatus",
        "codecommit:CancelUploadArchive"
    ],
    "Resource": [
        "arn:aws:codecommit:us-east-1:YOURACCOUNTNUMBERHERE:aft-*-customizations*",
        "arn:aws:codecommit:us-east-1:YOURACCOUNTNUMBERHERE:aft-module-*"
    ]
}
...
</pre><p></p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-28206561616450705272014-06-11T13:12:00.002-07:002014-06-11T15:37:05.374-07:00FOR XML PATH Pivot Columns as Key Value Pairs defeating the "must not come after a non-attribute-centric sibling in XML hierarchy in FOR XML PATH" error<p>Yesterday I was working in SQL to create an XML document from a query; I needed to make key/value pairs out of my column list, using one attribute as the key and the element content as the value. Pretty common pattern in XML, right? I am aware of, and have used, the PIVOT keyword; however, using it to describe what I am doing here seemed like the best choice of overloaded term. After several failed attempts and some googling, I threw the query below into Management Studio and finally got the output I had been looking for. The solution? Simply add a '' between your elements, telling SQL Server that you have a whitespace value between the two elements.</p>
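<p>The gist below contains the full query; as a minimal sketch of the pattern (with hypothetical key/value literals), the empty string between the pairs is what defeats the error:</p>
<pre class="brush: sql">
SELECT
'color' AS [pair/@key], 'red' AS [pair],
'', -- the empty literal ends the first pair element, so the next @key is legal again
'size' AS [pair/@key], 'large' AS [pair]
FOR XML PATH('item')
</pre>
<p>Removing the '' should reproduce the titular error, because the second @key column would follow the non-attribute-centric element content before it.</p>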
<script src="https://gist.github.com/abstractlabs/350d6745da886dc8babf.js"></script>Anonymoushttp://www.blogger.com/profile/02923461543579951804noreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-84908707019914886262013-11-04T20:41:00.001-08:002014-02-07T06:54:15.550-08:00Windows AppFabric Installation Error 1603<p>Recently I re-installed and upgraded my local workstation to Windows 8.1; shortly after, I installed Windows AppFabric and received error 1603. I heart this error code: it says so little and means so much. After looking at the detailed error logs, I saw that the Hosting Services and Cache Services failed to install. The log file, in fun log-file fashion, references another log file that contains the output of the MSI package that actually failed. Within that log file you may see something that looks like this:</p>
<pre>
Get-Content : The term 'Get-Content' is not recognized as the name of a
Error: cmdlet, function, script file, or operable program. Check the spelling of the
</pre>
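<p>Before digging further, it is worth inspecting the execution policy from an elevated console; a quick sketch (run it in both the 32-bit and 64-bit consoles):</p>
<pre>
# Show the effective policy
Get-ExecutionPolicy
# Allow locally created scripts to run; downloaded scripts must be signed
Set-ExecutionPolicy RemoteSigned
</pre>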
<p>Balderdash, you say: how could a cmdlet so core to PowerShell not be present during an installation running in administrative mode? The answer.. you forgot to say Simon says, or in other words, <em>Set-ExecutionPolicy RemoteSigned</em>. In short, set the execution policy of the PowerShell console you will be installing with. Make sure to set it in both the 32-bit and 64-bit PowerShell consoles to be safe. It should also be noted that error 1603 can mean a variety of different things, so make sure your logged conditions apply.</p>Anonymoushttp://www.blogger.com/profile/02923461543579951804noreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-41798427332979610332013-06-14T18:40:00.000-07:002016-02-07T02:10:37.573-08:00Transaction was deadlocked on resources with another process and has been chosen as the deadlock victim on page and primary key using Microsoft Sql Server<h2>Background</h2>
<p>There are many possibilities for deadlocks to occur; this post specifically covers a condition I encountered where a table's primary key and a page lock on the same table created a deadlock error. I am not going to go into the specifics of <a href="http://blog.sqlauthority.com/2007/05/16/sql-server-fix-error-1205-transaction-process-id-was-deadlocked-on-resources-with-another-process-and-has-been-chosen-as-the-deadlock-victim-rerun-the-transaction/">troubleshooting deadlocks</a>; chances are, if you have made it here, you already know about using SQL Server Profiler. Deadlocks occur when two processes are each trying to use the other's locked object at the same time. The most common cause I have encountered for deadlocks is timing: the longer the transaction or operation, the higher the risk that another concurrent operation will create a deadlock situation.</p>
<h2>Example</h2>
<p>I encountered the deadlock below when a new table in the system began to grow quickly. The table in question was used in frequent and concurrent read/write operations. This situation is probably one that is frequently encountered, and the solution in my case turned out to be very simple. Figure A can be distilled to the following SQL:
</p><div align="center"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbbW-NWGqivHuXPRaxxCnHyuVDBgAuT1pwvaz8vjOiXtafxgY2vajS8mapUFE9KKTFCXY8PhhyphenhyphenZ7lLgS0DfMRv1W__8WghFvj4jTiQfnHzjzQO6YWCyeS6jfkHj5Zyb9FbIwleA0m7WvM/s1600/Untitled.png"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgbbW-NWGqivHuXPRaxxCnHyuVDBgAuT1pwvaz8vjOiXtafxgY2vajS8mapUFE9KKTFCXY8PhhyphenhyphenZ7lLgS0DfMRv1W__8WghFvj4jTiQfnHzjzQO6YWCyeS6jfkHj5Zyb9FbIwleA0m7WvM/s512/Untitled.png"></a></div>
<pre class="brush: sql">CREATE TABLE A
(
    Id int IDENTITY PRIMARY KEY
)
CREATE TABLE B
(
    Id int IDENTITY PRIMARY KEY,
    AId int FOREIGN KEY REFERENCES A(Id)
)
CREATE TABLE C
(
    Id int IDENTITY PRIMARY KEY,
    BId int FOREIGN KEY REFERENCES B(Id)
)
CREATE TABLE D
(
    Id int IDENTITY PRIMARY KEY,
    AId int FOREIGN KEY REFERENCES A(Id),
    CId int FOREIGN KEY REFERENCES C(Id)
)
-- Insert some records into A, B, C, D ...
-- Left oval in figure A; this is the deadlock victim
DELETE B WHERE Id=1
-- Right oval in figure A
DELETE C WHERE Id=1
</pre>
<p></p>
<h2>Explanation</h2>
<p>You might have expected the statements above to contain a cascade delete option; there isn't one. If you go looking for indexes, the only ones present are the primary keys created by SQL Server. Which leads me to the root cause of this issue: table D grew to 500K+ rows, which is about where the adventure began. As I mentioned earlier, the most common cause I have encountered is timing, or bad timing in this case. The deletes on table B and table C were conflicting because none of the foreign keys had indexes on them, making the constraint checks take longer and leaving more time for a deadlock to occur. After adding the following indexes the deadlocks were immediately resolved:
</p><pre class="brush: sql">CREATE INDEX IX_D_TO_A_FK ON D
(
AId
)
CREATE INDEX IX_D_TO_C_FK ON D
(
CId
)
</pre>
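<p>To find other foreign keys that lack a supporting index before they bite, the SQL Server catalog views can be queried directly. A rough diagnostic sketch (it only checks whether the foreign key column is the leading column of some index):</p>
<pre class="brush: sql">
SELECT fk.name AS ForeignKeyName,
OBJECT_NAME(fkc.parent_object_id) AS TableName,
COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS ColumnName
FROM sys.foreign_keys fk
JOIN sys.foreign_key_columns fkc ON fk.object_id = fkc.constraint_object_id
WHERE NOT EXISTS
(
    SELECT 1 FROM sys.index_columns ic
    WHERE ic.object_id = fkc.parent_object_id
    AND ic.column_id = fkc.parent_column_id
    AND ic.key_ordinal = 1
)
</pre>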
<p></p>Anonymoushttp://www.blogger.com/profile/02923461543579951804noreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-81260472617688748162013-06-08T07:17:00.000-07:002013-06-08T07:17:00.471-07:00Safari Webkit IOS6 and Brightness<h2>Apple putting the bright in brightness</h2>
<p><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwpQNiHCQm2EuOE0t5GEv8gSwUCrPz7zrsNle3nXQuhEJWgvV94NMlz_06WXK5Q7XzLDF9GZCkxpL0sUGVs6JykvZaE-VGcBfK2ds750i9_hIWd8nXWcaUpgSpUxhEL0u9BajG_c7TY4I/s320/Untitled.png" align="right" />While working on an HTML5 application I decided to get fancy and use the webkit brightness effect. This worked great in Chrome and even Firefox, but when testing in Safari something wasn't quite right. Apparently Apple decided to implement the brightness function a little differently than the <a href="http://www.w3.org/TR/2013/WD-filter-effects-20130523/#ltfilter-functiongt">W3C</a> specification. In Safari brightness is a scale from -1 to +1 where 0 is normal and -1 is completely black; the spec calls for 0 as black and 1 as normal.</p>
<pre class="brush:css">
/* Spec: 0 is fully black, 1 is normal; iOS 6 Safari: 0 is normal, -1 is black */
-webkit-filter: brightness(0);
</pre>Anonymoushttp://www.blogger.com/profile/02923461543579951804noreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-90675263485685807192013-04-08T05:44:00.001-07:002014-03-05T13:24:28.587-08:00Windows Azure using git deployment with two different websites in the same repository<img src="http://images2.wikia.nocookie.net/__cb20121209233835/muppet/images/8/8c/1836e.jpg" align="right" width="280"/>
<p>Excited about the Windows Azure git deployment feature I decided to give it a try with <a href="https://bitbucket.org/">bitbucket</a>. Wow, talk about dead simple invisible magic deployment; I was very surprised at how easy it was. The only hangup is that the process is designed to handle only a single-site deployment; <a href="https://github.com/projectkudu/kudu/wiki/Customizing-deployments">if you have more than one web project and/or solution</a>, it's time to roll up your sleeves and read some dude's blog.</p>
<p>The git deployment uses a project called <a href="https://github.com/projectkudu">kudu</a> that executes when a deployment is triggered by the git repository service of your choice. The process runs a series of shell commands to script and execute the deployment. While looking for a solution I came across an <a href="http://blog.amitapple.com/post/38419111245/azurewebsitecustomdeploymentpart3">article</a> that explained how to deploy two sites, one nodejs and the other mvc4, using a conditional batch file. It is just as easy to use this same method for two or more site deployments by using app setting keys to negate any conditional logic, thus turning your deploy.cmd into one line of sweet deployment victory.</p>
<p>
Try it yourself using the following steps:
<ol>
<li>In the <a href="https://manage.windowsazure.com/#Workspace/WebsiteExtension/websites">windows azure portal</a> select the website you want to include.</li>
<li>Select the configuration tab</li>
<li>Scroll Down to app settings</li>
<li>
Use the key <em>APPLICATION</em>, then enter the name of the application; you will use this later
<img border="0" src="http://1.bp.blogspot.com/-2BlGI9_jhfI/UWKzWLAIM9I/AAAAAAAAFs8/JS4LQny9tlI/s480/Application-settings.png" />
</li>
<li>Repeat steps 1-4 for the 2nd website you will be deploying</li>
<li>On your local computer open a git prompt:
<pre class="brush:shell">npm install azure-cli -g</pre>
<strong>You are running Windows, aren't you? That's OK, the git prompt is a little bashful.</strong>
</li>
<li>This is going to install all kinds of azure magic; go get a beverage and come back, or stay and watch the console light show</li>
<li>Now from your solution root:
<pre class="brush:shell">
azure site deploymentscript --aspWAP {Relative Path To Your 1st Project}.csproj -s {Relative Path To Your Solution}.sln
cp deploy.cmd {Application Name Used in 1st Azure App Settings}.cmd
azure site deploymentscript --aspWAP {Relative Path To Your 2nd Project}.csproj -s {Relative Path To Your Solution}.sln
cp deploy.cmd {Application Name Used in 2nd Azure App Settings}.cmd
</pre></li>
<li>Now edit your deploy.cmd to look like this:
<pre class="brush:shell">
@ECHO OFF
%APPLICATION%.cmd
</pre>
<strong>Yes.. it's that easy!</strong>
</li>
<li>Commit your changes</li>
<li>Check to see if they deployed in the azure portal deployments tab</li>
</ol>
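<p>For reference, the <em>azure site deploymentscript</em> command also drops a .deployment file in the repository root; this is what points kudu at your deploy.cmd, and it should look something like the following:</p>
<pre class="brush:shell">
[config]
command = deploy.cmd
</pre>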
</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-54565409525863113552013-03-30T18:28:00.001-07:002013-09-08T11:46:41.193-07:00A simple cross browser JQuery console logging framework JLoggins<h1>Cross browser logging with JQuery</h1>
<p><img border="0" src="http://userserve-ak.last.fm/serve/252/29057491.png" align="right" />A couple of months ago I tried to find a simple logging framework for JQuery and the browser console. I wanted something simple that supported logging levels and could be easily dropped in to assist in debugging complicated call stacks. I came across some libraries and example code but, I wasn't finding exactly what I was looking for. As a result I decided to put together <a href="https://github.com/abstractlabs/jloggins">JLoggins</a>. I have had the code posted for several months on Google code but since <a href="http://plugins.jquery.com/jloggins/">JQuery</a> has finally standardized on Github I re-posted it.</p>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-73191482667473223972013-01-29T19:37:00.000-08:002013-01-29T19:44:01.901-08:00Deleting logs with PowerShell and Scheduled Tasks making the mundane a little less painful<img border="0" height="300" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw9MFMNwSkquvVlk-fgvyhiW8XF-dlWdFOx6W8WDrOVfVXp4PT0DgBDkmSk0PLcp7LzBZkZz1oijPJ0qYHDjXV7tTCbDbY6p5qKBfD89IKnkPY__BvxibmYK7WnMV61hcPK-bg8ic922mI/s400/delete-all-the-things.jpg" align="right" alt="Delete all the things." />
<p>This is a new take on an age-old problem: purging or rotating outdated logging information from servers and workstations alike. I am sure there are hundreds if not thousands of batch files and PowerShell scripts that delete old log files, but what if someone wrote one that not only deletes the old files but is also self-aware? Ok.. maybe not self-aware, and a far cry from <a href="http://en.wikipedia.org/wiki/Skynet_(Terminator)">Skynet</a>, but how about one that allows you to schedule and un-schedule itself, using the provided parameters, as a <a href="http://windows.microsoft.com/en-US/windows7/schedule-a-task">Windows Scheduled Task</a>?</p>
<p>Well, look no further, someone has; drop the snippet below into a file called Delete-Logs.ps1 and you too can <a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgw9MFMNwSkquvVlk-fgvyhiW8XF-dlWdFOx6W8WDrOVfVXp4PT0DgBDkmSk0PLcp7LzBZkZz1oijPJ0qYHDjXV7tTCbDbY6p5qKBfD89IKnkPY__BvxibmYK7WnMV61hcPK-bg8ic922mI/s400/delete-all-the-things.jpg" target="_blank">delete all the things</a> with ease. I have tested this script with Windows 7 and Server 2008 using PowerShell 2.0. While researching, I discovered there are also some convenience <a href="http://technet.microsoft.com/en-us/library/jj649816.aspx">cmdlets for task scheduler</a> allowing you to manage tasks; however, for compatibility reasons I decided to stick with <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/bb736357(v=vs.85).aspx">schtasks.exe</a>. You will notice I had to remove the path characters from the task name :'(. I had to remove them because of a <a href="http://support.microsoft.com/kb/960608">bug limiting special characters in scheduled task names</a>.
</p>
<pre class='brush:shell'>
# Parameters
# -Path A valid absolute path to a log folder or collection of folders script will recursively scan folders
# -Include A wildcard pattern used to include files for consideration
# -Days Number of days before deleting file from folder
# -Schedule Creates a scheduled task using the provided path, include, and days parameter. Task will run as system user and execute every night at 12:00 AM every
# number of days provided in days parameter
# -Unschedule Removes scheduled task that was previously configured for the provided path
# Examples
# Run as stand alone
# .\Delete-Logs.ps1 -Path:'D:\Logs' -Days:4
# Schedule Windows Task
# .\Delete-Logs.ps1 -Path:'C:\Temp' -Include:'*.tmp' -Days:7 -Schedule
# Unschedule Windows Task
# .\Delete-Logs.ps1 -Path:'C:\Temp' -Include:'*.tmp' -Days:7 -Unschedule
Param ($Path='C:\Logs',[string]$Include='*.log', [int]$Days=30, [switch]$Schedule,[switch]$Unschedule)
$Path = New-Object PSObject -Property:@{ Value = $Path; Name = $($Path -replace '\:\\|\\',' ')}
if($Schedule)
{
    Invoke-Expression "C:\Windows\System32\schtasks.exe /create /tr `"`"$PSHome\powershell.exe`" -File '`"'$($MyInvocation.MyCommand.Path)'`"' -Path '`"'$($Path.Value)'`"' -Include '`"'$Include'`"' -Days $Days`" /tn `"Rotate $($Path.Name) every $Days days`" /sc daily /mo $Days /st 00:00 /ru SYSTEM"
}
elseif($Unschedule)
{
    Invoke-Expression "C:\Windows\System32\schtasks.exe /delete /tn `"Rotate $($Path.Name) every $Days days`" /F"
}
else
{
    $count = 0;
    $start = Get-Date;
    Write-Host "Begin log rotation of $($Path.Value) at $start"
    Get-ChildItem $($Path.Value) -Recurse -Include:$Include | Where-Object {($_.CreationTime -le $(Get-Date).AddDays(-$Days))} | ForEach-Object {
        $date = Get-Date;
        $age = $date.ticks - $_.CreationTime.ticks;
        $age = New-Object -TypeName:'TimeSpan' -ArgumentList:$age;
        Write-Host "Rotating $_ at $date file was $age old";
        Remove-Item $_ -Force;
        $count++;
    }
    $end = Get-Date;
    $duration = $end - $start
    Write-Host "End log rotation of $($Path.Value) at $end $count file(s) were rotated in $duration"
}
</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-29804615630321317552013-01-01T17:59:00.003-08:002013-01-01T18:23:37.544-08:00Batch Converting WMA or WAV to MP3 using PowerShell<p>A family member of mine is a music collector and has had some trouble in the past managing his library. Initially, when it was set up, he had been using Windows Media Center to rip media into his collection. Windows Media Center uses Windows Media Player under the hood, which is set by default to encode using the WMA format. We didn't notice this until several months later. Windows Media Player makes it easy to switch the encoding format, but not so easy to re-encode the existing library.</p>
<p>I hate reinventing the wheel, so I went Googling for a batch conversion tool. I looked for a couple of hours, installing several candidates. None seemed to batch very well, and all but a couple of sourceforge projects were loaded with extra free-version garbage. I stumbled onto <a href="http://ffmpeg.zeranoe.com/">ffmpeg</a>; it is a very versatile command-line encoding tool. I must have been reading some bad or platform-specific examples, because they all instructed the use of -acodec libmp3lame as an argument when encoding. This yielded an error indicating that the libmp3lame library was not a valid encoder. That bogged me down for a while trying to find out why this seemingly ubiquitous library was missing or not loading properly. Turns out it was there all along; I just needed to use -f mp3 and let the exe figure the rest out.</p>
<p>This is what I was able to put together with some help from a few other examples. I simply recurse through a directory structure and call <a href="http://ffmpeg.zeranoe.com/">ffmpeg.exe</a> using the <a href="http://ss64.com/ps/invoke-expression.html">Invoke-Expression</a> cmdlet. The script example is targeted at .wma files, but it could easily be modified to convert .wav, .ogg, or any other <a href="http://ffmpeg.zeranoe.com/">ffmpeg</a>-supported format. This setup seems to be working well enough; I may consider using something similar to normalize the file tagging with <a href="http://musicbrainz.org/doc/MusicBrainz_Picard">MusicBrainz Picard</a>.</p>
<p>Make sure to backup the files you plan on changing before running any batch operation. <strong>This script deletes the original .wma file</strong> see the "Remove-Item" line. There is no error checking around the EXE call to make sure the conversion completed successfully and that the file is playable. Prior to building this script I staged my changes so I could play around to get the script right.</p>
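<p>Distilled down, the per-file command the script builds looks roughly like this (hypothetical file names, with the 192k bit rate used below):</p>
<pre class='brush: shell'>
ffmpeg -i "song.wma" -id3v2_version 3 -f mp3 -ab 192k -ar 44100 "song.mp3" -y
</pre>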
<pre class='brush: shell'>
#Set the path to crawl
$path = 'C:\Documents and Settings\User\My Documents\My Music\Convert'
#The source or input file format
$from = '.wma'
#The encoding bit rate
$rate = '192k'
Get-ChildItem -Path:$path -Include:"*$from" -Recurse | ForEach-Object -Process: {
    $file = $_.Name.Replace($_.Extension,'.mp3')
    #$input is a reserved automatic variable in PowerShell, so use $source instead
    $source = $_.FullName
    $output = $_.DirectoryName
    $output = "$output\$file"
    #-i Input file path
    #-id3v2_version Force id3 version so windows can see id3 tags
    #-f Format is MP3
    #-ab Bit rate
    #-ar Frequency
    # Output file path
    #-y Overwrite the destination file without confirmation
    $arguments = "-i `"$source`" -id3v2_version 3 -f mp3 -ab $rate -ar 44100 `"$output`" -y"
    $ffmpeg = ".'C:\Program Files\ffmpeg\bin\ffmpeg.exe'"
    Invoke-Expression "$ffmpeg $arguments"
    Write-Host "$file converted to $output"
    #Delete the old file when finished
    #This could use some error checking around it to prevent accidental deletion.
    Remove-Item -Path:$_
}
</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-19891373026540909182012-10-17T17:56:00.000-07:002013-02-04T17:49:28.126-08:00Making your WCF WSDL more compatible, Flatten All The Imports <p>Recently, while building a SOAP and RESTful service using WCF and Routing, I encountered the need to replace default WSDL imports with inline type definitions. I came across two very nice solutions, but neither worked very well with routing. I am using .NET 4.0 and did not install the beta Web API, although I really liked the fluent-like configuration it offers when setting up routing and WCF services. If you have the luxury of using .NET 4.5, this <a href="http://blogs.msdn.com/b/piyushjo/archive/2011/10/05/what-s-new-in-wcf-4-5-flat-wsdl-support.aspx" target="_blank">feature</a> is already built in and only requires a different query string for the WSDL (singleWsdl). The first solution, <a href="http://wcfextras.codeplex.com/" target="_blank">WCFExtras</a> (<a href="http://wcfextrasplus.codeplex.com/" target="_blank">WCFExtras+</a>, WCFExtras 2.0 super deluxe edition..), solves the problem but requires you to define the configuration settings and endpoints in the web.config. Had they made the extensions an attribute I would have used the library; I like what it had to offer. The <a href="http://blogs.msdn.com/b/dotnetinterop/archive/2008/09/23/flatten-your-wsdl-with-this-custom-servicehost-for-wcf.aspx" target="_blank">second solution</a> gets you 90% there but requires a custom factory to wrap the default one, and I am not a big fan of creating a factory just to flatten the WSDL. The code looks like it has been copied, pasted, and re-blogged a few times, but I wanted to add the ability to apply it as an attribute to a service instead of messing with configuration settings.</p>
<h3>Endpoint Behavior Attribute Declared</h3>
<pre class="brush:c-sharp">
[SingleWsdl]
public class MyService : IMyService
{
public MyMessage MyOperation()
{
...
}
}
</pre>
<h3>Endpoint Behavior Attribute</h3>
<pre class="brush:c-sharp">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel.Description;
using System.Xml.Serialization;
using System.Xml.Schema;
using System.Collections;
using ServiceDescription = System.Web.Services.Description.ServiceDescription;
public class SingleWsdlAttribute : Attribute, IServiceBehavior, IEndpointBehavior, IWsdlExportExtension
{
    #region Fields
    #endregion
    #region Properties
    #endregion
    #region Construction
    #endregion
    #region Methods
    private void Resolve(XmlSchema schema, XmlSchemaSet set, List<XmlSchema> imports)
    {
        foreach (XmlSchemaImport import in schema.Includes)
        {
            foreach (XmlSchema ixsd in set.Schemas(import.Namespace))
            {
                if (!imports.Contains(ixsd))
                {
                    imports.Add(ixsd);
                    Resolve(ixsd, set, imports);
                }
            }
        }
    }
    private void Merge(XmlSchema schema, XmlSchemas destination)
    {
        //Strip the import directives before inlining the schema
        for (int i = 0; i < schema.Includes.Count; i++) if (schema.Includes[i] is XmlSchemaImport) schema.Includes.RemoveAt(i--);
        destination.Add(schema);
    }
    #endregion
    #region IWsdlExportExtension Members
    void IWsdlExportExtension.ExportContract(WsdlExporter exporter, WsdlContractConversionContext context) { }
    void IWsdlExportExtension.ExportEndpoint(WsdlExporter exporter, WsdlEndpointConversionContext context)
    {
        XmlSchemaSet set = exporter.GeneratedXmlSchemas;
        foreach (ServiceDescription description in exporter.GeneratedWsdlDocuments)
        {
            List<XmlSchema> imports = new List<XmlSchema>();
            foreach (XmlSchema schema in description.Types.Schemas) Resolve(schema, set, imports);
            description.Types.Schemas.Clear();
            foreach (XmlSchema schema in imports) Merge(schema, description.Types.Schemas);
        }
    }
    #endregion
    #region IServiceBehavior Members
    void IServiceBehavior.AddBindingParameters(System.ServiceModel.Description.ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase, System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { }
    void IServiceBehavior.ApplyDispatchBehavior(System.ServiceModel.Description.ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase)
    {
        serviceDescription.Endpoints.Where(p => !p.Behaviors.Contains(this)).ForEach(a => a.Behaviors.Add(this));
    }
    void IServiceBehavior.Validate(System.ServiceModel.Description.ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase) { }
    #endregion
    #region IEndpointBehavior Members
    void IEndpointBehavior.AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters) { }
    void IEndpointBehavior.ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime) { }
    void IEndpointBehavior.ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher) { }
    void IEndpointBehavior.Validate(ServiceEndpoint endpoint) { }
    #endregion
}
//Bonus content.. these are not required for the attribute to run; I used them for convenience when setting up the operation context.
public static class EnumerableExtensions
{
    public static void ForEach<T>(this IEnumerable<T> instance, Action<T> operation)
    {
        if (instance != null && operation != null) foreach (T item in instance) operation(item);
    }
    public static void For<T>(this IEnumerable<T> instance, Action<T,int> operation)
    {
        if (instance != null && operation != null) for (int index = 0; index < instance.Count(); index++) operation(instance.ElementAt(index), index);
    }
}
</pre>
<h3>Before Single WSDL Behavior Attribute</h3>
<pre class="brush:xml">
...
<wsdl:types>
<xsd:schema targetNamespace="http://tempuri.org">
<xsd:import schemaLocation="http://tempuri.org/myservice?xsd=xsd1" namespace="http://tempuri.org/"/>
<xsd:import schemaLocation="http://tempuri.org/myservice?xsd=xsd0" namespace="http://schemas.microsoft.com/2003/10/Serialization/"/>
</xsd:schema>
</wsdl:types>
...
</pre>
<h3>After Single WSDL Behavior Attribute</h3>
<pre class="brush:xml">
...
<wsdl:types>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://schemas.microsoft.com/2003/10/Serialization/" attributeFormDefault="qualified" elementFormDefault="qualified" targetNamespace="http://schemas.microsoft.com/2003/10/Serialization/">
<xs:element name="anyType" nillable="true" type="xs:anyType"/>
<xs:element name="anyURI" nillable="true" type="xs:anyURI"/>
<xs:element name="base64Binary" nillable="true" type="xs:base64Binary"/>
<xs:element name="boolean" nillable="true" type="xs:boolean"/>
<xs:element name="byte" nillable="true" type="xs:byte"/>
<xs:element name="dateTime" nillable="true" type="xs:dateTime"/>
<xs:element name="decimal" nillable="true" type="xs:decimal"/>
<xs:element name="double" nillable="true" type="xs:double"/>
<xs:element name="float" nillable="true" type="xs:float"/>
<xs:element name="int" nillable="true" type="xs:int"/>
<xs:element name="long" nillable="true" type="xs:long"/>
<xs:element name="QName" nillable="true" type="xs:QName"/>
<xs:element name="short" nillable="true" type="xs:short"/>
<xs:element name="string" nillable="true" type="xs:string"/>
<xs:element name="unsignedByte" nillable="true" type="xs:unsignedByte"/>
<xs:element name="unsignedInt" nillable="true" type="xs:unsignedInt"/>
<xs:element name="unsignedLong" nillable="true" type="xs:unsignedLong"/>
<xs:element name="unsignedShort" nillable="true" type="xs:unsignedShort"/>
<xs:element name="char" nillable="true" type="tns:char"/>
<xs:simpleType name="char">
<xs:restriction base="xs:int"/>
</xs:simpleType>
<xs:element name="duration" nillable="true" type="tns:duration"/>
<xs:simpleType name="duration">
<xs:restriction base="xs:duration">
<xs:pattern value="\-?P(\d*D)?(T(\d*H)?(\d*M)?(\d*(\.\d*)?S)?)?"/>
<xs:minInclusive value="-P10675199DT2H48M5.4775808S"/>
<xs:maxInclusive value="P10675199DT2H48M5.4775807S"/>
</xs:restriction>
</xs:simpleType>
<xs:element name="guid" nillable="true" type="tns:guid"/>
<xs:simpleType name="guid">
<xs:restriction base="xs:string">
<xs:pattern value="[\da-fA-F]{8}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{4}-[\da-fA-F]{12}"/>
</xs:restriction>
</xs:simpleType>
<xs:attribute name="FactoryType" type="xs:QName"/>
<xs:attribute name="Id" type="xs:ID"/>
<xs:attribute name="Ref" type="xs:IDREF"/>
</xs:schema>
</wsdl:types>
...
</pre>
<h2>Update</h2>
<h3>ServiceDescription Imports element is showing</h3>
<p>Your ServiceDescription Imports element is showing, why? I came across this a number of months after posting this, and if you have binged or googled it you will find not much is documented on this little guy. I discovered it by mistake, or rather by a mistake I made: defining my binding with a namespace that differed from the rest of my service. My fix was to correct the namespace; your fix might be different, which is why I am not addressing it in the code above.</p>
<h3>An example of what might cause a ServiceDescription imports element</h3>
<pre class="brush:c-sharp">
Description.Endpoints[0].Binding = new BasicHttpBinding(BasicHttpSecurityMode.None)
{
Name = "vader",
//This property does not match service namespace
Namespace = "http://nooooooooooooooo.com",
AllowCookies = false
};</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-14741281245878819762012-07-08T16:07:00.000-07:002015-03-27T21:16:42.110-07:00Static is as static does, initializing ThreadStatic variables<p>I found the need to create some thread static variables using the ThreadStaticAttribute today in C#. In a hurry, I figured I would initialize them with a value.</p>
<pre class="brush: c-sharp">
[ThreadStatic]
private static string __host = "localhost";
[ThreadStatic]
private static int __port = 30000;
</pre>
<p>On the second thread's call into the class I found that __host and __port were back to their default values. Curious, I decided to take a look at the compiler-generated code and found:
</p>
<pre class="brush: c-sharp">
static MyObject()
{
__host = "localhost";
__port = 0x7530;
}
</pre>
<p>Why? The attribute is telling the runtime how to share the variable, while the language is telling the compiler how to create the IL: the field initializers are emitted into the static constructor, which runs only once, on whichever thread happens to touch the class first. Every other thread sees the fields at their default values.</p>
<p>Easy, so initialize them in the instance-level constructor, right? No. That would overwrite the thread-static values every time an instance is created. The best solution I found was to use nullable values and GetValueOrDefault (on .NET 4 and later, System.Threading.ThreadLocal<T> with a value factory is another option).</p>
<pre class="brush: c-sharp">
private const string DefaultHost = "localhost";
private const int DefaultPort = 30000;
[ThreadStatic]
private static string __host;
[ThreadStatic]
private static int? __port;
//...//
public string Host
{
get
{
return String.IsNullOrEmpty(__host) ? DefaultHost : __host;
}
set
{
__host = value;
}
}
public int Port
{
get
{
return __port.GetValueOrDefault(DefaultPort);
}
set
{
__port = value;
}
}
</pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-71065051093728945522012-06-29T18:30:00.000-07:002013-10-19T21:22:06.833-07:00P: is for Path and is less than 260 characters<div class="brush: plain">
TF10128: The path <path> contains more than the allowed 260 characters. Type or select a shorter path.
</div>
<p>I have run into this issue a number of times over the past year; for some reason engineers just want to name a project, class or solution what it is, without having to think about string lengths and file systems. I like to keep my projects folder in my user profile because that's where everything else is... neat and tidy. Thankfully there is a solution, one that hearkens back to the days of flannel, blue jeans and Birkenstock sandals with socks, possibly where this limitation all started.</p>
<p>After solving this now for the 4th time, the following works best. Take the code below, paste it into Notepad, save it as a .reg file and change the path to your source location. Remember to use two backslashes in the .reg file for every single backslash in the path or it won't work. <strong>You will have to reboot your computer for the change to take effect.</strong></p>
<pre class="brush: plain">
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices]
"P:"="\\??\\D:\\Path\\To\\Your\\Code"
</pre>
<del>
<p>To solve the problem I simply add a batch file to my start up folder that creates a drive substitution.</p>
<pre class="brush: bash">
@ECHO OFF
SUBST P: "%USERPROFILE%\My Projects"
</pre>
<p>How the characters limitation is calculated:</p>
<pre class="brush: plain">
Path length + 1 (separator) +
Solution name length + 1 (separator) +
Project name length + 1 (separator) +
Project name length +
80 (Reserved space)
</pre>
<p>
Dude where is my admin?
<ol>
<li><del>Create a shortcut to the batch/cmd file</del></li>
<li><del>Right click on properties</del></li>
<li><del>Replace the Target with: %windir%\System32\cmd.exe /c "Map Projects.cmd"</del></li>
<li><del>Click "Advanced..."</del></li>
<li><del>Check "Run as administrator"</del></li>
</ol>
This method works more reliably and allows for drive substitution for the system user (useful when debugging a Windows service):
<ol>
<li>Create a scheduled task that runs on login of your user account or system startup</li>
<li>Set the task to run under the user you want P for</li>
<li>Enter SUBST as the program name</li>
<li><strong>Instead of %USERPROFILE% use the full path</strong></li>
<li>Enter P: "C:\Users\{Your Account Name Here}\My Projects" in the arguments box</li>
<li>Run the task manually; if you get a 0 result you have successfully configured P:</li>
</ol>
</p>
</del>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-81515041316117906492012-05-01T10:12:00.000-07:002012-05-02T19:04:24.456-07:00Unity, Injection and the surprise InvalidCastException of System.TypeChances are, if you have run into this message, you are trying to inject a constructor, property or method with a System.Type as the parameter value. <em>The reason?</em> The default behavior of Unity is to resolve any Type it is given to an instance and inject that for you. This feature applies to all injection and is not terribly clear in the Unity documentation. It also comes up quite frequently on the <a href="http://stackoverflow.com/questions/tagged/unity" target="_blank">stackoverflow</a> unity channel.<br />
An example of the full error message in all its glory:<br />
<pre class="brush: plain">
Resolution of the dependency failed, type = \"MyNamespace.IMyInteface\", name = \"(none)\".\r\nException occurred while:
Resolving parameter \"myConstructorParameter\" of constructor MyNamespace.MyClass.Logger(System.Type myConstructorParameter).
Exception is: InvalidCastException - Unable to cast object of type 'MyActualInstanceOfTypeProvidedToConstructor' to type 'System.Type'.
-----------------------------------------------
At the time of the exception, the container was:
Resolving MyNamespace.MyClass,(none) (mapped from MyNamespace.IMyInterface, (none))\r\n Resolving parameter \"myConstructorParameter\" of constructor MyClass(System.Type myConstructorParameter)
</pre>
Solving this issue is not as complicated as the error message makes it seem. Microsoft did leave some flexibility for us; it simply involves wrapping the type argument in an injection parameter:<br />
<pre class="brush: csharp">
new ParameterOverride("typeParameterArgument",new InjectionParameter(typeof(Type),typeof(MyNamespace.MyClassUsedAsInjectedTypeArgument)))
</pre>
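<p>Put together, a resolve call using that override might look like the following; a sketch in which the class and parameter names (MyClass, typeParameterArgument) are hypothetical stand-ins, not from a real project:</p>
<pre class="brush: csharp">
// Wrapping the Type in an InjectionParameter stops Unity from
// resolving it to an instance before injection
var instance = container.Resolve<MyNamespace.MyClass>(
    new ParameterOverride("typeParameterArgument",
        new InjectionParameter(typeof(Type), typeof(MyNamespace.MyClassUsedAsInjectedTypeArgument))));
</pre>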
<em>Why?</em> If you dive into the reflection tool of your choice you will eventually come across "InjectionParameterValue.ToParameter". This little gem will show you that Unity checks whether the value is already an InjectionParameterValue; if it is not and the value is a Type, it returns a new ResolvedParameter using the argument as the type to resolve.<br />
References:
<ul>
<li>
<a href="http://stackoverflow.com/questions/5600488/parameteroverride-failing-on-system-type-parameter">ParameterOverride failing on System.Type parameter</a></li>
<li><a href="http://kozmic.pl/2008/12/03/unity-framework-and-the-principle-of-the-least-surprise/" target="_blank">http://kozmic.pl/2008/12/03/unity-framework-and-the-principle-of-the-least-surprise/</a></li>
</ul>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-88616014219567095102012-02-15T20:44:00.000-08:002012-04-27T08:23:56.296-07:00MVC3 controller action model argument is nullAfter re-naming some controller action arguments I recently discovered a method that was previously working, unexpectedly returning null for the controller action argument. The action used the default model binder pulling data from both the query string and route values. Consider the following code:<br />
<br />
<pre class="brush: csharp">
public class Search
{
public string Name {get;set;}
public string Query {get;set;}
}
public ActionResult Results(Search query)
{
....
}
....
///results/{Name}
....
///results/monkeys?Query=12-of-them-jumping</pre>
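<p>One fix, shown here as a sketch with a hypothetical argument name, is to rename the action argument so it no longer matches a property on the model:</p>
<pre class="brush: csharp">
// "criteria" does not collide with Search.Name or Search.Query, so the
// default model binder populates the object from the route and query string
public ActionResult Results(Search criteria)
{
    ....
}
</pre>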
<br />
<br />The default model binder rather than binding to the object is trying to take your "Query" and plug it into the action method as an argument. Since string is not "Search" it results in a null value, the short lesson to take from this puzzle is to avoid naming properties on your object the same name as arguments you are passing into the controller action.Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-40738570545294332152011-12-15T14:57:00.001-08:002015-03-27T21:48:41.053-07:00TSQL SELECT Convert or Cast DateTime but WHERE fails<div>
Can a select return when a where fails?<br />
<div>
<br />
<div>
This zen-like question came up this afternoon while digging through some rather raw varchar table data. The answer is yes, and part of the answer is coming up with the right question. While googling around I ended up settling on <a href="http://stackoverflow.com/questions/7263501/conversion-to-datetime-fails-only-on-where-clause">"CONVERT fails in where clause but not in select"</a>.</div>
<br />
Consider the following problem:</div>
<pre class="brush: sql">
SELECT * FROM dbo.MyTable
WHERE ISDATE(value)=1 AND CAST(Value AS datetime) > GETUTCDATE()
--OR
SELECT * FROM dbo.MyTable
WHERE ISDATE(value)=1 AND CONVERT(datetime,Value) > GETUTCDATE()
Value
-----------------------
2012-01-02 00:00:00.000
--BUT
SELECT CONVERT(datetime,Value) FROM dbo.MyTable
WHERE ISDATE(value)=1
--OR
SELECT CAST(Value AS datetime) FROM dbo.MyTable
WHERE ISDATE(value)=1
Msg 241, Level 16, State 1, Line 1
--HUH?
Conversion failed when converting date and/or time from character string.
</pre>
<br />
<div>
To paraphrase the above, SQL Server may evaluate rows outside of the expected WHERE clause based on how the optimizer decides to limit the result set. This left me with three solutions:<br />
<ol><br />
<li>Create an index to persuade the optimizer to avoid the plan that evaluates non-date columns... perhaps not.</li>
<br />
<li>Reload the cast or converted data into a #temporary table; yes, this will work, but really?</li>
<br />
<li>My solution below, courtesy of the path of least resistance: add some CASE logic around the value column</li>
</ol>
</div>
<pre class="brush: sql">
SELECT Value FROM dbo.MyTable
WHERE CASE WHEN ISDATE(value)=1 THEN CONVERT(datetime,Value)
ELSE NULL END > GETUTCDATE()
--OIC ~(:o)</pre>
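<p>As an aside, SQL Server 2012 introduced TRY_CONVERT, which returns NULL instead of raising an error when the conversion fails; on that version or later the guard can be written without the CASE expression:</p>
<pre class="brush: sql">
-- TRY_CONVERT yields NULL for non-date values, and NULL > GETUTCDATE() is never true
SELECT Value FROM dbo.MyTable
WHERE TRY_CONVERT(datetime, Value) > GETUTCDATE()
</pre>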
<br />
<div>
<br />
When you think about how SQL Server has to discover the table data it makes sense. If you haven't run into the condition before it may cause a little head scratching. The better solution would be to use a date time column in the first place if possible, but hopefully with this post and the corresponding <a href="http://stackoverflow.com/questions/7263501/conversion-to-datetime-fails-only-on-where-clause">stackoverflow</a> post you can save a few extra hairs on your head.</div>
<br /></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-7695509958263750722011-09-20T13:05:00.000-07:002012-05-02T19:05:03.197-07:00Azure Security Certificate Names Are Case Sensitive<div>
If you receive the following error and are not using a proxy, and do not have Fiddler open, you might have some mixed case somewhere in your certificate name configuration settings:</div>
<div>
<br /></div>
<div>
1:55:36 PM - Warning: There are package validation warnings.</div>
<div>
1:55:36 PM - Preparing...</div>
<div>
1:55:36 PM - Connecting...</div>
<div>
1:55:38 PM - Uploading...</div>
<div>
1:56:25 PM - Creating...</div>
<div>
1:56:42 PM - HTTP Status Code: 500/nError Message: The server encountered an internal error. Please retry the request./nOperation Id: {GUID}</div>
<div>
1:56:43 PM - Deleting Quality Assurance Debug Only - Website</div>
<div>
1:56:44 PM - There was no endpoint listening at https://management.core.windows.net/{GUID}/services/hostedservices/website/deploymentslots/Staging that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details.</div>
<div>
1:56:44 PM - Deployment failed</div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-27199681736347837202011-09-09T12:16:00.001-07:002012-05-02T19:05:25.112-07:00Windows Azure Multi-Site Single Role Using Host Headers With Config TransformsI was excited to find out that Azure supported multiple sites with a single worker role with a few simple configuration changes thanks to <a href="http://www.wadewegner.com/2011/02/running-multiple-websites-in-a-windows-azure-web-role/">Wade Wegner's Example</a>. I was a little disappointed that the process of building the package did not include the XDT transforms on the config. Neither is the website packaged for deployment which strips out all non essential files references in the web application before deploying. After a number of google searches and some trial and error I came up with the following solution:<div><br /><br />In my example I created a single website and associated it to my azure project as the only worker role.<br /><br />In the worker role pre-build event I have placed the following for each website that needed to be packaged and transformed:<br /><br /><strong>Cloud Project Pre Build Event</strong><br /><pre class="brush: shell"><br />rmdir ..\Deploy.Cloud\Website.Mvc /S /Q<br />"%systemroot%\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" "$(ProjectDir)..\Website.Mvc\Website.Mvc.csproj" "/p:Platform=AnyCPU;PackageAsSingleFile=False;Configuration=$(ConfigurationName);DesktopBuildPackageLocation=..\Package\Website.Mvc;PackageAsSingleFile=False;IntermediateOutputPath=..\Deploy.Cloud\Website.Mvc\\" /t:Package<br /></pre><br /><em>*You must remove the intermediate location before it creates the package otherwise debug/release configurations will not differ</em></div><div><em><br /></em></div><div><em>*Note the extra slash at the end yes that is intentional if you do not add it you will get some interesting error messages about properties with no value</em><br /><br 
/><strong>ServiceDefinition.csdef</strong><br /><pre class="brush: shell"><br /><Site name="Website.Mvc" physicalDirectory="Website.Mvc\Package\PackageTmp"><br /></pre><br /><br /><strong>What's going on here</strong><br /><div>Every time the cloud project builds it will copy and package the website in the cloud project's project root folder as the website name. Next step is to update the ServiceDefinition.csdef use the intermediate path location for the physicalDirectory. Why? because the package directory contains all of the assets to be deployed as a web deployment project which is not needed when deploying using a web role.</div></div><br /><br /><em>References</em><br /><div><br /><a href="http://www.wadewegner.com/2011/02/running-multiple-websites-in-a-windows-azure-web-role/">http://www.wadewegner.com/2011/02/running-multiple-websites-in-a-windows-azure-web-role/</a><br /><br /><a href="http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild">http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild</a><br /><br /><a href="http://codingcockerel.co.uk/2008/05/18/how-to-publish-a-web-site-with-msbuild/">http://codingcockerel.co.uk/2008/05/18/how-to-publish-a-web-site-with-msbuild/</a><br /></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-34517862308126483472011-08-17T10:42:00.000-07:002013-01-01T18:28:36.595-08:00Windows Azure Accelerator for Web Roles Missing Trace Logging By DefaultAfter upgrading our applications to Windows Azure Accelerator for Web Roles I found that the trace messages disappeared. Windows Azure Accelerator for Web Roles default project does not enable trace logging by default. To fix this simply add the following lines to the WebRole.cs of the Windows Azure Accelerator for Web Role project.<br /><pre class="brush: csharp"><br />private static void ConfigureDiagnosticsMonitor()<br />{<br />//... 
Omitted from example<br /> // Trace Logs<br /> diagnosticMonitorConfiguration.Logs.ScheduledTransferLogLevelFilter = LogLevel.Undefined;<br /> diagnosticMonitorConfiguration.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);<br />//... Omitted from example<br />}<br /></pre>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-33293887729260645232011-02-08T07:46:00.000-08:002012-05-02T19:06:16.560-07:00Using XDT with Worker Role App.configs<div>Recently I wrote a worker role in windows azure and was hoping the app.config would transform much like the web.config in the web roles. Sadly this is not the case. The configuration transforms are very useful when making a distinction between production, staging and development environments. After a little research, I did discover how to roll my own hopefully Microsoft will extend this feature to the rest of the azure project types. </div><br /><div>Like the web roles this transform is only executed when a deploy is executed, here is how it was done:</div><div><div><br /></div><div><ol><li>Un-load the project containing your worker role and app.config</li><li>Edit the project file at the bottom add the following lines: <pre class="brush: xml"><br /><usingtask taskname="TransformXml" assemblyfile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll"><br /><target name="AfterCompile" condition="exists('app.$(Configuration).config')"><br /><transformxml source="app.config" destination="$(IntermediateOutputPath)$(TargetFileName).config" transform="app.$(Configuration).config"><br /></transformxml></target><br /></usingtask><br /></pre><br /></li><br /><li>Add the following Item Group:<pre class="brush: xml"><br /><ItemGroup><br /><Content Include="App.config" /><br /><Content Include="App.Debug.Config"><br /> <DependentUpon>App.Config</DependentUpon><br /> <SubType>Designer</SubType><br /></Content><br /><Content Include="App.Release.Config"><br /> 
<DependentUpon>App.Config</DependentUpon><br /></Content><br /></ItemGroup></pre></li><li>Now add the new configuration files to the project: App.Debug.config, App.Release.config</li><li>Reload the worker role project</li><li>Unload the cloud service project that references the worker role</li><li>Open the file for edit and add the following line:<pre class="brush: xml"><br /><Target Name="CopyWorkerRoleConfigurations" BeforeTargets="AfterPackageComputeService"><br /> <Copy SourceFiles="..\{<b>PROJECT PATH</b>}\obj\$(Configuration)\{<b>ASSEMBLY NAME</b>}.dll.config" DestinationFolder="$(IntermediateOutputPath)\{<b>PROJECT NAME</b>}" OverwriteReadOnlyFiles="true"/><br /></Target><br /></pre></li><br /><li>Reload the project file</li></ol><div><br /></div>You will need to use the name of your project and the assembly name in the locations above. The paths are relative to the azure service project file. If you get lost you can add an &lt;Error&gt; element to the target and set the text to your path.<br /></div></div>Unknownnoreply@blogger.comtag:blogger.com,1999:blog-4423400943969518521.post-37940267691329483792010-09-17T17:35:00.000-07:002010-09-17T17:38:27.860-07:00PowerShell YammerUsing what I had learned in my <a href="http://blog.abstractlabs.net/2010/09/powershell-openauthentication-and.html">last post</a>, I built a small PowerShell snap-in to integrate with the <a href="https://www.yammer.com">Yammer</a> API, which is better suited for companies. I have started a second <a href="http://psyammer.codeplex.com/">CodePlex</a> project specifically for the snap-in.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-45285698326328601462010-09-03T14:32:00.000-07:002010-09-03T14:44:48.146-07:00PowerShell OpenAuthentication and TwitterTwitter recently deprecated support of Basic Authentication, which has made it a little more tricky to post updates via the API.
I decided a Cmdlet seemed like the best route to take since the new method required storage of state for the oAuth token. I started a <a href="http://pstwitter.codeplex.com/">CodePlex</a> site for those of you who might also need the functionality or would like to contribute.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-91970844036784605492010-08-20T21:15:00.001-07:002010-08-20T21:26:07.664-07:00Where Am I.cmdAfter re-installing I have yet again found my static DHCP lease is no longer holding my IP address. I am at home and can't think of another way to find the IP; pinging my machine name continues to resolve to the static lease. If you find yourself in the same boat and want to scan the network for IP-to-name resolution, the following script should do the trick:<br /><pre><br />@ECHO OFF<br />FOR /L %%i IN (1,1,254) DO (<br />ECHO 10.1.1.%%i<br />nbtstat -A 10.1.1.%%i<br />)<br /></pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-4423400943969518521.post-8909078673253274252010-08-17T21:49:00.000-07:002010-08-17T21:59:28.042-07:00VSTS 2010 MsBuild Extension Pack TfsSource and Win32Exception: The system cannot find the file specifiedI came across this error while trying to check out a file using MsBuild under TFS2010. The error message is misleading because your first inclination is to check the ItemPath or WorkingDirectory in the task. Had I not come across this forum post I may have been stuck for a bit: <a href="http://msbuildextensionpack.codeplex.com/Thread/View.aspx?ThreadId=74209">http://msbuildextensionpack.codeplex.com/Thread/View.aspx?ThreadId=74209</a>. The current version of the Extension Pack has some hard-coded paths in the task that tell it where to find the tf.exe file.
After taking a look at the source code I found if you just add the following path to your Path environment variable the task will function properly.<br /><pre><br />C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\<br /></pre>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-4423400943969518521.post-4191668093873774362010-08-17T17:42:00.000-07:002010-08-17T20:25:17.819-07:00Where's the Build Number, using MSBuild with VSTS 2010I wanted to key our assembly versions off the VSTS build number, but I quickly found the solution is not as simple as it seems. At first I thought this property would be a first-class member of the IBuildDetail class, making it easily accessible in the VSTS workflow, but it's not. After popping open the UpdateBuildNumberActivity I found the build number is actually extracted from the IBuildDetail.Uri. Use the format below in your MsBuild task to gain access to the BuildID and the SourceGetVersion. While the SourceGetVersion may return a sequential change set number (C123) it is not guaranteed to be numeric, so it's probably not a good fit for the revision part of your version number. You may consider adding it to your AssemblyConfiguration attribute or AssemblyInformationalVersion attribute. Connecting the change set or source version to the assemblies generated may prove to be useful in troubleshooting version-related issues later on.<br /><pre><br />String.Format("/p:SkipInvalidConfigurations=true /p:BuildID=""{1}"" /p:SourceGetVersion=""{2}"" {0}", MSBuildArguments, LinkingUtilities.DecodeUri(BuildDetail.Uri.ToString()).ToolSpecificId, BuildDetail.SourceGetVersion)<br /></pre>Unknownnoreply@blogger.com0