The Correct Way to Send .NET Objects to Azure Service Bus Queues from .NET Applications

Azure Functions provides numerous types of triggers, which ensure that functions can be executed in response to actions on the associated resources. Service Bus provides two major messaging subsystems:

  • Queues
  • Topics

A queue has one sender and one receiver for its messages; once a message is read by the receiver, it is removed from the queue. In contrast, a topic has one sender but multiple receivers. The message is replicated once per receiver, and each copy is removed when its receiver reads it.

Service Bus messages can be read as strings, and that is what the default scaffolding code in Visual Studio does. Reading string messages works fine but has its limitations. The primary limitation is that there is no type checking and no specified schema for the message: it can contain any payload and will still be accepted by the Azure function. There is no design-time or runtime validation of the incoming message unless the code within the function performs it explicitly.

The next image shows the boilerplate code generated by Visual Studio for a Service Bus Queue binding in an Azure function.

s1
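For reference, here is a minimal sketch of what that scaffolded function typically looks like; the function name, queue name and connection-setting name ("ProcessOrder", "ordersqueue", "ServiceBusConnection") are placeholders rather than values from the original project.

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrder
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        [ServiceBusTrigger("ordersqueue", Connection = "ServiceBusConnection")] string myQueueItem,
        ILogger log)
    {
        // the message arrives as a plain string; no schema or type validation happens here
        log.LogInformation($"ServiceBus queue trigger function processed message: {myQueueItem}");
    }
}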

Sending a message to the queue executes this function successfully, as shown next.

s2

How can we make this better, so that there is automatic validation and type checking when a message is received by the function?

One of the features of Azure Functions is that we can declare custom types for use within the function. Instead of accepting a string value, we can accept an object of this custom type in our function signature. Generally, these types should be declared centrally and shared across multiple functions.

First, we must understand that incoming messages are treated as JSON payloads by the Azure Functions runtime, which then converts the JSON payload into whatever parameter data type the function expects.
Once the runtime sees a custom type instead of a string in the function signature, it will try to deserialize the incoming message into that type rather than into a string. If the shape of the type differs from the content of the message, this deserialization fails and the function will not execute.

We can see this in action here.

First, let's change the code of the Azure function. Instead of accepting a string argument, it now accepts a .NET object of type "OrderInfo". The "OrderInfo" class is also declared alongside the function.

s3
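A sketch of the reworked function is shown below; the OrderInfo properties are illustrative, since the actual class definition lives in the screenshot above.

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class OrderInfo
{
    public string OrderId { get; set; }
    public string CustomerName { get; set; }
    public int Quantity { get; set; }
}

public static class ProcessOrder
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        [ServiceBusTrigger("ordersqueue", Connection = "ServiceBusConnection")] OrderInfo order,
        ILogger log)
    {
        // the runtime has already deserialized the JSON payload into OrderInfo at this point
        log.LogInformation($"Received order {order.OrderId} for {order.CustomerName}");
    }
}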

 

Now we can write a client application responsible for sending messages to this queue. We also need the definition of the OrderInfo object on the client side. The client-side code is shown next.

s4

The Azure function will now accept these messages, validate whether the incoming payload can be deserialized into an OrderInfo object, and continue executing the function if that succeeds.

s5

The client code creates an instance of the OrderInfo class, fills its properties with the necessary values, serializes it into JSON, converts the JSON into bytes and assigns them to the Body property of a Message object. Finally, the Message object is sent using the QueueClient object.
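A sketch of such a client is shown below, assuming the Microsoft.Azure.ServiceBus and Newtonsoft.Json packages and reusing the OrderInfo class shown earlier; the connection string, queue name and property values are placeholders.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Newtonsoft.Json;

public static class OrderSender
{
    private const string ConnectionString = "<service-bus-connection-string>";
    private const string QueueName = "ordersqueue";

    public static async Task SendOrderAsync()
    {
        var order = new OrderInfo { OrderId = "1001", CustomerName = "Contoso", Quantity = 5 };

        // serialize the object to JSON, convert it to bytes and wrap it in a Service Bus message
        var message = new Message(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(order)))
        {
            ContentType = "application/json" // tells the function runtime the payload is JSON
        };

        var queueClient = new QueueClient(ConnectionString, QueueName);
        await queueClient.SendAsync(message);
        await queueClient.CloseAsync();
    }
}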

 

An important consideration is that the client code sets the content type to "application/json". If we do not set this property, the function fails with the following error message.

 

I have seen many developers on the internet struggling with this error. The solution is to set the ContentType property to "application/json".

s6
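In terms of the client sketch shown earlier, the fix is a single assignment on the Message object before sending it:

// without this, the Functions runtime cannot bind the payload to OrderInfo
message.ContentType = "application/json";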

 

Happy Coding !!

 

Using Text Analytics Key Phrase Cognitive Services API from PowerShell

There are abundant samples for executing the Text Analytics Cognitive Services API from C# and other languages like Node.js. While searching, I did not find any examples of consuming the Text Analytics API through PowerShell, and that is what this blog is all about.

In this blog post, I am going to show how to use the Text Analytics Key Phrase Cognitive Services API to extract key phrases from a given sentence or paragraph. Cognitive Services are REST APIs that can be invoked from any language and any platform. They are built using industry standards, and message exchange happens through JSON payloads.

It is important to understand that Cognitive Services are provided as a PaaS offering from Azure. You need a valid Azure subscription and must provision a Cognitive Services resource in a resource group. While provisioning this resource, Text Analytics API should be chosen as the API type. The Text Analytics API service contains a set of REST APIs, one of which relates to key phrase extraction. This is shown in Figure 1.

Cognitive Service Text Analytics

After the service is provisioned, a set of unique keys is generated for it. Any client that wants to invoke and consume this instance of Cognitive Services must send one of these keys with its request. The service validates the key and, if it matches a key it holds, allows the request to execute successfully.

Now that the service is provisioned, it's time to write the client using PowerShell.

 

Open your favorite PowerShell console and write the script shown next. The code is quite simple and consists of only a few statements.

# URI of the Text Analytics Key Phrases REST API
$keyPhraseURI = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.0/keyPhrases"

# key to identify a valid request. You should provide your own key
$apiKey = "xxxxxxxxxxxxxxxxxxxxxxx"

# preparing the JSON document used as the message payload
$documents = @()
$message = @{"language" = "en"; "id" = "1"; "text" = "I had a wonderful experience! The rooms were wonderful and the staff were helpful."}
$documents += $message
$final = @{documents = $documents}
$messagePayload = ConvertTo-Json $final

# invoking the Key Phrase REST API
$result = Invoke-RestMethod -Method Post -Uri $keyPhraseURI -Headers @{"Ocp-Apim-Subscription-Key" = $apiKey} -Body $messagePayload -ContentType "application/json" -ErrorAction Stop

The code is well commented, but to walk through it: the first statement declares a variable holding the URL of the Text Analytics Key Phrase REST API, and the second holds the key required to authenticate the request with Cognitive Services. You should provide your own key.

The next set of statements prepares the JSON message payload that is passed to the REST API as the request body. A hashtable is declared containing language, id and text key-value pairs, added to a documents array and converted into JSON. The last line invokes the REST API using the Invoke-RestMethod cmdlet, passing in the URI, a header containing the subscription key, the body and the content type. The header must contain the Ocp-Apim-Subscription-Key custom header with the API key as its value; the request will fail if this header is missing or contains an invalid key.

The response is a JSON object containing the key phrases extracted by the Text Analytics service.
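If you want to inspect the full response structure rather than a single property, you can convert the result object back to JSON:

# view the complete response returned by Invoke-RestMethod
$result | ConvertTo-Json -Depth 5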

Executing $result.documents.keyPhrases on the console returns the extracted key phrases, as shown next.

PS C:\Users\rimodi> $result.documents.keyPhrases

staff

wonderful experience

rooms

Hope you liked the blog post. Please send your feedback, and if you would like to stay connected, you can reach me on Twitter @automationnext and on LinkedIn at https://www.linkedin.com/in/ritesh-modi/

Happy coding and Cheers!

 

Powershell Desired State Configuration Partial Configurations without ConfigurationID

Overview

One of the most awaited and interesting features of WMF 5 DSC is Partial Configuration. Before WMF 5, it was difficult to split a large configuration into multiple smaller configurations. Partial Configuration enables us to split a configuration into multiple smaller configuration fragments across multiple files. Partial configurations are implemented exactly the same way as any general DSC configuration. It is the responsibility of the LCM on a target server to combine all the configuration fragments into a single configuration and apply it.

The configuration fragments and files do not indicate in any way that they are partial configurations. Each partial configuration is complete in itself and can be applied independently as a configuration to any server. Partial configurations are deployed on a pull server following a sequence of steps, and the target node's LCM is configured to download these partial configurations, combine them and apply them on the host. The magic of Partial Configuration is conducted by the LCM.

Partial Configurations work with DSC pull, push and mixed modes. In this blog we will delve deeper into partial configurations in pull mode. This means that the LCM of each server in the network should be configured to pull configurations from a pull server (a web server or an SMB share) and should be able to identify the configurations distinctly on these pull servers.

All preview releases of WMF 5 had partial configurations available as a feature, but they worked using an LCM property known as ConfigurationID, whose value is a GUID. With the RTM release, partial configurations still work with ConfigurationID, but they also work when ConfigurationID is not provided. This is a huge leap from previous releases, as there is no longer a need to remember Configuration IDs as part of DSC configuration names. Now configurations can be referred to just by their names, which is much more natural and easier to use and manage.

Benefits of Partial Configuration

Some of the benefits of Partial Configurations are

  1. Multiple authors can author configurations independently and simultaneously for servers in a network.
  2. Incremental configurations can be applied to servers without modifying any existing configurations.
  3. Modular authoring of configurations.
  4. Removes the dependency on a single MOF file. In DSC v1 only one MOF file was allowed and applied to a server at a given point in time; a newer configuration (MOF) would replace the current configuration.

Steps for using Partial Configuration

To make partial configurations work, the following steps should be performed.

  1. Creation of Pull Server
  2. Configuring LCM MetaConfiguration of servers in the network.
  3. Authoring Configurations
  4. Deploying Configurations on the pull server.

We will not go into the details of creating a pull server; I will cover that in a separate blog. For the purpose of this blog, we will assume that the pull servers are already deployed and configured.

There can be more than one pull server within an enterprise, so to make the example in this blog more realistic we will assume there are two pull servers, named marapwkvm0 and SQLWitness0. The target node that will pull partial configurations from these two servers is marapdovm. We also have two configurations, each deployed to one of the pull servers. The LCM of the target machine (marapdovm) will be configured with these two pull servers and configurations.

LCM Configuration

Let's now focus on configuring the target server's LCM. Specifically, we need to configure:

  1. RefreshMode with a value of "Pull".
  2. Optionally, but desirably, ConfigurationMode with a value of "ApplyAndAutoCorrect" to keep the server in the expected state. It also gives us something tangible to see for the purposes of this blog.
  3. RefreshMode is set to Pull within the Settings block. This makes all partial configurations use pull mode.
  4. Multiple ConfigurationRepositoryWeb resource instances, each representing a pull server. The URL of the pull server on marapwkvm0 is https://marapwkvm0:8090/PSDSCPullServer.svc/ (running on port 8090), and the URL of the pull server on SQLWitness0 is https://sqlwitness0:8100/PSDSCPullServer.svc/ (running on port 8100).
    • Each pull server is configured with a RegistrationKey. This is a shared key between the target node and the pull server. The RegistrationKey for the respective pull server should be provided within this block; it has been blanked out in the screenshot for security reasons, and you should put your own RegistrationKey in these values.
    • ConfigurationNames is a new property added to ConfigurationRepositoryWeb. This property determines the configurations that should be downloaded and applied on the target node. It is an array property and can contain multiple configuration names. The names should match the deployed configurations exactly.
  5. Multiple PartialConfiguration resource instances, each representing a configuration on a pull server. On pull server marapwkvm0, a configuration named "IISInstall" is deployed whose sole purpose is to install IIS, while on pull server SQLWitness0 another configuration named "IndexFile" is deployed whose purpose is to generate a .htm file with some content. The names of the partial configurations should match the configurations available on the pull servers as well as the names provided as values to the "ConfigurationNames" property of ConfigurationRepositoryWeb.

The entire code for the LCM configuration is shown here. This code should be run on the target node, which in our case is marapdovm.

partialconfig-1
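For readers who prefer text over screenshots, here is a minimal sketch of what such a meta-configuration could look like; the registration keys are placeholders and the descriptions are illustrative.

[DSCLocalConfigurationManager()]
configuration PartialConfigDemo
{
    Node localhost
    {
        Settings
        {
            RefreshMode       = 'Pull'
            ConfigurationMode = 'ApplyAndAutoCorrect'
        }

        ConfigurationRepositoryWeb IISConfig
        {
            ServerURL          = 'https://marapwkvm0:8090/PSDSCPullServer.svc/'
            RegistrationKey    = '<registration key of this pull server>'
            ConfigurationNames = @('IISInstall')
        }

        ConfigurationRepositoryWeb FileConfig
        {
            ServerURL          = 'https://sqlwitness0:8100/PSDSCPullServer.svc/'
            RegistrationKey    = '<registration key of this pull server>'
            ConfigurationNames = @('IndexFile')
        }

        PartialConfiguration IISInstall
        {
            Description         = 'Installs the Web-Server (IIS) role'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]IISConfig')
            RefreshMode         = 'Pull'
        }

        PartialConfiguration IndexFile
        {
            Description         = 'Creates the index .htm file'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]FileConfig')
            RefreshMode         = 'Pull'
        }
    }
}

# compile the meta-configuration MOF into the folder used later by Set-DscLocalConfigurationManager
PartialConfigDemo -OutputPath 'C:\PartialConfigurationDemo'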

The above code should be executed only after the partial configurations are authored and deployed on the respective pull servers.

To reiterate: the PartialConfiguration blocks define the configuration fragments. Two partial configurations, "IISInstall" and "IndexFile", are defined. The "IISInstall" configuration is available on the IISConfig pull server, while the "IndexFile" configuration is available on the FileConfig pull server. The names of the partial configurations are important because they must match the names of the configurations on the pull servers exactly. You will see next that the "IISInstall" configuration is authored and made available on the first pull server (marapwkvm0) and the "IndexFile" configuration on the second (SQLWitness0). The "ConfigurationSource" property attaches the pull server to the partial configuration.

IISInstall Configuration

This is a simple configuration responsible for installing IIS (Web-Server) on a server using the WindowsFeature resource. Executing the configuration results in the generation of a MOF file, and a corresponding checksum file is also generated for it. Both files, the MOF and the checksum, are copied to the ConfigurationPath folder, which in my case is "C:\Program Files\WindowsPowershell\DSCservice\Configuration". The configuration uses localhost as the node name; however, while copying the files, they are renamed after the configuration name.

IISInstall-1
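A sketch of the configuration and its deployment steps could look like this; the intermediate output path (C:\IISInstall) is an assumption, while the pull server folder matches the path mentioned above.

configuration IISInstall
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        WindowsFeature WebServer
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# compile the configuration to localhost.mof
IISInstall -OutputPath 'C:\IISInstall'

# rename the MOF after the configuration, copy it to the pull server folder and generate its checksum
$pullPath = 'C:\Program Files\WindowsPowershell\DSCservice\Configuration'
Copy-Item 'C:\IISInstall\localhost.mof' (Join-Path $pullPath 'IISInstall.mof')
New-DscChecksum -Path (Join-Path $pullPath 'IISInstall.mof')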

The New-DscChecksum cmdlet is responsible for generating the checksum for the configuration MOF file. Both IISInstall.mof and IISInstall.mof.checksum should now be available in the "C:\Program Files\WindowsPowershell\DSCservice\Configuration" folder on the marapwkvm0 server.

IndexFile Configuration

This is again a simple configuration, responsible for creating a .htm file in the C:\inetpub\wwwroot folder on the server using the File resource. Executing the configuration results in the generation of a MOF file, and a corresponding checksum file is also generated for it. Both files, the MOF and the checksum, are copied to the ConfigurationPath folder, which in my case is "C:\Program Files\WindowsPowershell\DSCservice\Configuration". The configuration uses localhost as the node name; however, while copying the files, they are renamed after the configuration name.

IndexFile-1
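A corresponding sketch for this configuration is shown below; the file contents and the intermediate output path (C:\IndexFile) are assumptions.

configuration IndexFile
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node localhost
    {
        File IndexHtm
        {
            DestinationPath = 'C:\inetpub\wwwroot\index.htm'
            Contents        = '<h1>Deployed through a DSC partial configuration</h1>'
            Type            = 'File'
            Ensure          = 'Present'
        }
    }
}

# compile, rename, copy to the pull server folder and generate the checksum
IndexFile -OutputPath 'C:\IndexFile'
$pullPath = 'C:\Program Files\WindowsPowershell\DSCservice\Configuration'
Copy-Item 'C:\IndexFile\localhost.mof' (Join-Path $pullPath 'IndexFile.mof')
New-DscChecksum -Path (Join-Path $pullPath 'IndexFile.mof')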

Both IndexFile.mof and IndexFile.mof.checksum should now be available in the "C:\Program Files\WindowsPowershell\DSCservice\Configuration" folder on the SQLWitness0 server.

Now it's time to move to the target node and apply the LCM configuration that we authored earlier.
Execute the command below to apply the LCM configuration on the target node.

Set-DscLocalConfigurationManager -Path "C:\PartialConfigurationDemo" -Force -Verbose

Below is the output we should be able to see

Set-LCM-1

After the LCM configuration has been modified to enable partial configurations, it's time to apply the configuration by asking the LCM to pull the configurations from the pull servers.

Execute the below command to pull, store and combine the configurations on the target node.

Update-DscConfiguration -Wait -Verbose

Below is the output we should see

Update-Config-1

The above command will download the configurations, combine them and put them into a pending state; it will not apply them immediately. When the LCM runs again, depending on the value of ConfigurationModeFrequencyMins, the configuration will be applied based on the value of ConfigurationMode. In our case, it will apply the configuration and also auto-correct it.

To execute the configuration immediately, run the following command

Start-DscConfiguration -UseExisting -Wait -Force -Verbose

Below is the output we should see

final-Output-1

VOILA!!! You can see that both configurations, along with their resources (IIS and the index file), are applied to the server.

We have applied partial configurations to a node by referring to the configurations by their names instead of using ConfigurationID GUIDs.
This is just the beginning; stay tuned for more detailed information.

If you like this post please share and if you have any feedback please share that too.

In the next post, we will go deeper into partial configurations on WMF 5 RTM.

Cheers!!

Installing WMF 5.0 April preview release

Before installing the WMF 5.0 April preview release, we first have to download it. It can be downloaded from WMF 5.0 Download

Also, before installing the WMF 5.0 April preview release, remember to save all your files and close your applications, as the installer will ask for a restart of the server.

Installing the WMF 5.0 April release is quite simple. However, ensure that the following updates are uninstalled from the operating system before installing it.

  1. KB3055381
  2. KB3055377
  3. KB2908075

Also, there are different installers for different operating systems. Based on your operating system and its processor type, the proper installer should be chosen.

Windows Server 2012 R2, Windows 8.1 Pro, and Windows 8.1 Enterprise

  1. x64: WindowsBlue-KB3055381-x64.msu
  2. x86: WindowsBlue-KB3055381-x86.msu

Windows Server 2012

  1. x64: Windows8-KB3055377-x64.msu

Windows 7 SP1 and Windows Server 2008 R2 SP1

  1. x64: Windows6.1-KB2908075-x64.msu
  2. x86: Windows6.1-KB2908075-x86.msu

In this case, I am installing on a 64-bit Windows Server 2012 R2 machine, so I chose the "WindowsBlue-KB3055381-x64.msu" installer.

Double-clicking the installer starts the process of installing the WMF 5.0 April preview release.

april-01-01

Click the Open button. It will ask for confirmation to install; click the "Yes" button.

Accept the EULA and the installation will start. It takes approximately a minute to install.

april-01-02

april-01-03

It will ask for a restart of the server. Click the "Restart Now" button.

After the restart, you can go to Control Panel | Programs and Features | View Installed Updates to verify the installation.

april-01-04

You can also verify the successful installation through PowerShell using the Get-HotFix cmdlet, as shown below.
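For example, using the KB number of the Windows Server 2012 R2 installer used above:

# confirm the update that delivered the WMF 5.0 April preview is installed
Get-HotFix -Id KB3055381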

april-01-05

You should be able to see the PowerShell and WSMan versions shown below as well.
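A quick way to check both is to inspect $PSVersionTable, which includes the PSVersion and WSManStackVersion entries:

# shows PSVersion, WSManStackVersion and related version information
$PSVersionTable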

april-01-06

Hope you enjoyed this post.

Cheers!!