Thursday, June 15, 2023

Provisioning Cross-Account Dependencies in Amazon Account Factory for Terraform

Before we dive into setting up an additional provider, it helps to go over a little of the project structure for AWS Account Factory for Terraform (AFT). When you create a new customization by copying the template directory in your project, you will notice two Jinja files in the terraform folder. These files act as templates that work with the AFT pipeline to properly configure your Terraform script for execution. By default they come pre-configured with a single provider block, and that provider is pointed at the target account when the pipeline executes. Jinja uses {{ variable }} placeholders, which are replaced with the values available in the execution pipeline. So what if you need to configure a cross-account dependency between your account and another account?

Configuring a second provider is pretty straightforward: copy the provider block, paste it into the template file, and replace the role_arn with a role in the secondary account. If the secondary account is also managed by AFT, you can assume its AWSAFTExecution role to give the Terraform provider administrative access, then set the alias in the second block to something you can reference in your script. That's it; now you can apply changes to multiple accounts from the same customization script.

# Second provider block: assumes the AWSAFTExecution role in the secondary account
provider "aws" {
  region = "us-east-1"
  alias  = "*YOURALIASHERE*"
  assume_role {
    role_arn = "arn:aws:iam::*YOURACCOUNTHERE*:role/AWSAFTExecution"
  }
  default_tags {
    tags = {
      managed_by = "AFT"
    }
  }
}
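
To consume the second provider from your customization, reference its alias on a resource (or pass it to a module through the providers map). A minimal sketch, assuming the alias was set to secondary and using a hypothetical SSM parameter purely for illustration:

# Hypothetical example; "aws.secondary" must match the alias set in the second provider block
resource "aws_ssm_parameter" "cross_account_example" {
  provider = aws.secondary

  name  = "/aft/example/cross-account-flag"
  type  = "String"
  value = "managed-from-aft-customizations"
}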

Wednesday, April 19, 2023

Using Terraform Remote Modules Sourced From CodeCommit using HTTPS (GRC) with AWS Account Factory (AFT) Workflow

AWS Account Factory for Terraform (AFT) is a robust templating framework that allows users to apply account configuration with Terraform uniformly across an organization using multiple accounts and consolidated billing. Currently, there is a limitation preventing the inclusion of CodeCommit modules in the customization Terraform script, which makes it difficult to compartmentalize the account customization scripts. Amazon recommends using HTTPS (GRC) to access CodeCommit; this method allows you to use AWS credentials to access the repository, negating the need for usernames, passwords, and keys. Fortunately, all of the nuts and bolts to source modules from CodeCommit are already there. We just need to make a few tweaks to the customizations folder and the role used to execute the customizations CodeBuild project.

Terraform pre-api-helpers.sh

To support CodeCommit GRC access, you need to install the git-remote-codecommit credential helper. This can be done from the pre-api-helpers.sh script, negating the need to modify the boilerplate CodeBuild project provided by AWS.

# AWS overwrites the Python venv when executing the pre-api-helpers, so first
# re-map it back to the venv that is used when Terraform executes
python3 -m venv $DEFAULT_PATH/aft-venv
# Install the CodeCommit credential helper into that venv's pip location
$DEFAULT_PATH/aft-venv/bin/pip install git-remote-codecommit

Terraform module reference

When setting up your repositories in CodeCommit for use as module sources, it is recommended to follow a naming convention so that you can reference that convention in the IAM policy, avoiding the need to either add each repository to the policy individually or grant access to all repositories. Note the aft-management profile in the URI below: it tells the credential helper to connect to the repository using the credentials of the AFT management account (or AFT management role).

module "aft-module-mymodule" {
  source = "git::codecommit::us-east-1://aft-management@aft-module-mymodule"
}

IAM Policy Changes

Next, we need to give AFT access to the module repositories so they can be referenced. As noted above, using a naming convention means any new modules you create are automatically covered by the IAM policy. Locate the aft-customizations-role in IAM and add the following statement, or amend the existing statement that allows access to the aft-account-customizations repository:

...
          {
            "Effect": "Allow",
            "Action": [
                "codecommit:GetBranch",
                "codecommit:GetRepository",
                "codecommit:GetCommit",
                "codecommit:GitPull",
                "codecommit:UploadArchive",
                "codecommit:GetUploadArchiveStatus",
                "codecommit:CancelUploadArchive"
            ],
            "Resource": [
                "arn:aws:codecommit:us-east-1:YOURACCOUNTNUMBERHERE:aft-*-customizations*",
                "arn:aws:codecommit:us-east-1:YOURACCOUNTNUMBERHERE:aft-module-*"
            ]
        }
...

Wednesday, June 11, 2014

FOR XML PATH Pivot Columns as Key Value Pairs defeating the "must not come after a non-attribute-centric sibling in XML hierarchy in FOR XML PATH" error

Yesterday I was working in SQL to create an XML document from a query, and I needed to turn my column list into key/value pairs, using one attribute as the key and the element content as the value. Pretty common pattern in XML, right? I am aware of and have used the PIVOT keyword; using it to describe what I am doing here just seemed like the best choice of overloaded term. After several failed attempts and some googling, I put a query together in Management Studio and finally got the output I had been looking for. The solution? Simply add a '' column between your elements, telling SQL Server that there is a whitespace value between the two elements.
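
A minimal sketch of the trick, using illustrative table and column names rather than the original query (attr, Color, and Size are placeholders):

-- Two key/value pairs rendered as <attr key="...">...</attr> elements
SELECT
    'Color' AS [attr/@key], p.Color AS [attr],
    '',  -- the empty-string column that separates the two attr elements
    'Size'  AS [attr/@key], p.Size  AS [attr]
FROM (SELECT 'Red' AS Color, 'Large' AS Size) AS p
FOR XML PATH('product');

-- Output: <product><attr key="Color">Red</attr><attr key="Size">Large</attr></product>
-- Remove the '' and SQL Server raises the "must not come after a
-- non-attribute-centric sibling in XML hierarchy in FOR XML PATH" error.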

Monday, November 4, 2013

Windows AppFabric Installation Error 1603

Recently I re-installed and upgraded my local workstation to Windows 8.1. Shortly after, I installed Windows AppFabric and received error 1603. I heart this error code: it says so little and means so much. Looking at the detailed error logs, I saw that the Hosting Services and Cache Services failed to install. The log file, in fun log-file fashion, references another log file that contains the output of the MSI package that actually failed. Within that log file you may see something that looks like this:

Get-Content : The term 'Get-Content' is not recognized as the name of a 
Error: cmdlet, function, script file, or operable program. Check the spelling of the 

Balderdash, you say: how could a cmdlet so core to PowerShell not be present during an installation running in administrative mode? The answer: you forgot to say Simon says, or in other words, Set-ExecutionPolicy RemoteSigned. In short, set the execution policy of the PowerShell console you will be installing from. To be safe, set it in both the 32-bit and 64-bit PowerShell consoles. It should also be noted that error 1603 can mean a variety of different things, so make sure the logged conditions apply to your case.
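
A quick sketch of the two commands (the SysWOW64 path below assumes a standard 64-bit Windows install; run from an elevated prompt):

# In an elevated 64-bit PowerShell console
Set-ExecutionPolicy RemoteSigned

# Repeat for the 32-bit console; from the 64-bit prompt you can reach it via SysWOW64
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe -Command "Set-ExecutionPolicy RemoteSigned"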

Friday, June 14, 2013

Transaction was deadlocked on resources with another process and has been chosen as the deadlock victim on page and primary key using Microsoft Sql Server

Background

There are many ways for deadlocks to occur; this post specifically covers a condition I encountered where a primary key lock and a page lock on the same table created a deadlock error. I am not going to go into the specifics of troubleshooting deadlocks; chances are, if you have made it here, you already know about using SQL Server Profiler. Deadlocks occur when two SPIDs are each waiting on an object the other has locked. The most common cause I have encountered is timing: the longer the transaction or operation, the higher the risk that another concurrent operation will create a deadlock situation.

Example

I encountered the deadlock below when a new table in the system began to grow quickly. The table in question was used in frequent and concurrent read/write operations. This situation is probably one that is frequently encountered, and the solution in my case turned out to be very simple. The deadlock graph (Figure A) can be distilled to the following SQL:

CREATE TABLE A
(
 Id int IDENTITY PRIMARY KEY
)
CREATE TABLE B
(
 Id int IDENTITY PRIMARY KEY,
 AId int FOREIGN KEY REFERENCES A(Id)
)
CREATE TABLE C
(
 Id int IDENTITY PRIMARY KEY,
 BId int FOREIGN KEY REFERENCES B(Id)
)
CREATE TABLE D
(
 Id int IDENTITY PRIMARY KEY,
 AId int FOREIGN KEY REFERENCES A(Id),
 CId int FOREIGN KEY REFERENCES C(Id)
)
-- Insert some records into A, B, C, D ...

-- Left oval in Figure A; this is the deadlock victim
DELETE B WHERE Id=1

-- Right oval in Figure A
DELETE C WHERE Id=1

Explanation

You might have expected the statements above to contain a cascade delete option; there isn't one. If you look for indexes on any of the tables, you will find only the primary keys created by SQL Server. Which leads me to the root cause of this issue: table D grew to 500K+ rows, which is about where the adventure began. As I mentioned earlier, the common cause I have encountered is timing, or bad timing in this case. The deletes in table B and table C were conflicting because none of the foreign keys had indexes on them, making the constraint checks take longer and leaving more time for a deadlock to occur. After adding the following indexes the deadlocks were immediately resolved:

CREATE INDEX IX_D_TO_A_FK ON D
(
AId
)
CREATE INDEX IX_D_TO_C_FK ON D
(
CId
)

Saturday, June 8, 2013

Safari Webkit IOS6 and Brightness

Apple putting the bright in brightness

While working on an HTML5 application I decided to get fancy and use the WebKit brightness filter. This worked great in Chrome and even Firefox, but when testing in Safari something wasn't quite right. Apparently Apple decided to implement the brightness function a little differently than the W3C specification. In Safari, brightness is a scale from -1 to +1 where 0 is normal and -1 is completely black; the spec calls for 0 to be black and 1 to be normal.

/* Unchanged in Safari on iOS 6; per the W3C spec this would render completely black */
-webkit-filter: brightness(0);

Monday, April 8, 2013

Windows Azure using git deployment with two different websites in the same repository

Excited about the Windows Azure git deployment feature, I decided to give it a try with Bitbucket. Wow, talk about dead-simple, invisible, magic deployment; I was very surprised at how easy it was. The only hangup is that the process is only designed to handle a single-site deployment. If you have more than one web project and/or solution, it's time to roll up your sleeves and read some dude's blog.

The git deployment uses a project called Kudu that executes when a deployment is triggered by the git repository service of your choice. The process runs a series of shell commands to script and execute the deployment. While looking for a solution, I came across an article that explained how to deploy two sites, one Node.js and the other MVC4, using a conditional batch file. It is just as easy to use this same method for two or more site deployments by using an app setting key to remove the conditional logic, turning your deploy.cmd into one line of sweet deployment victory.

Try it yourself using the following steps:

  1. In the Windows Azure portal, select the website you want to include.
  2. Select the Configuration tab.
  3. Scroll down to app settings.
  4. Add the key APPLICATION and enter the name of the application; you will use this later.
  5. Repeat steps 1-4 for the second website you will be deploying.
  6. On your local computer, open a git prompt:
    npm install azure-cli -g
    You are running Windows, aren't you? That's OK, the git prompt is a little bashful.
  7. This is going to install all kinds of Azure magic; go get a beverage and come back, or stay and watch the console light show.
  8. Now from your solution root:
    azure site deploymentscript --aspWAP {Relative Path To Your 1st Project}.csproj -s {Relative Path To Your Solution}.sln
    cp deploy.cmd {Application Name Used in 1st Azure App Settings}.cmd
    azure site deploymentscript --aspWAP {Relative Path To Your 2nd Project}.csproj -s {Relative Path To Your Solution}.sln
    cp deploy.cmd {Application Name Used in 2nd Azure App Settings}.cmd
    
  9. Now edit your deploy.cmd to look like this:
    @ECHO OFF
    %APPLICATION%.cmd
    
    Yes.. it's that easy!
  10. Commit your changes
  11. Check the Deployments tab in the Azure portal to see if they deployed
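
As an aside, deploy.cmd only runs because Kudu is pointed at it by a .deployment file, which the azure site deploymentscript command should have generated alongside it. If it is missing, it is a small INI-style file along these lines:

[config]
command = deploy.cmd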