PowerShell Community
https://devblogs.microsoft.com/powershell-community/
A place for the community to learn PowerShell and share insights

Encrypting secrets locally
https://devblogs.microsoft.com/powershell-community/encrypting-secrets-locally/
Mon, 29 Apr 2024 15:57:26 +0000
Keeping security folks happy (or less upset, which is the best we can hope for)

The post Encrypting secrets locally appeared first on PowerShell Community.

If you are involved in support or development, you often need to use secrets, passwords, or subscription keys in PowerShell scripts. These need to be kept secure and separate from your scripts, but you also need access to them ALL THE TIME.

So instead of hand-entering them every time, they should be stored in a key store of some sort that you can access programmatically. Often, off-the-shelf key stores are not available in your environment or are clumsy to access with PowerShell. A simple way to reach these secrets from PowerShell would be helpful.

You could simply keep them in plain text on your machine only, which limits exposure somewhat. However, there are still many risks with this approach, so adding some additional security is an excellent idea.

The .NET classes sitting behind PowerShell provide some simple ways to do this. This blog will go through:

  • Basic encryption / decryption
  • Using it day-to-day
  • Your own form-based key store

Basic encryption / decryption

The Protect and Unprotect methods available as part of the cryptography classes are easy to use. However, they work on byte arrays, so we can simplify their use by wrapping them in functions that take and return strings.

The following examples can be found at the MachineAndUserEncryption.ps1 module in my ps-community-blog repository on GitHub.

Encryption

Function Protect-WithUserKey {
    param(
        [Parameter(Mandatory=$true)]
        [string]$secret
    )
    Add-Type -AssemblyName System.Security
    $bytes = [System.Text.Encoding]::Unicode.GetBytes($secret)
    $SecureStr = [Security.Cryptography.ProtectedData]::Protect(
        $bytes,     # contains data to encrypt
        $null,      # optional data to increase entropy
        [Security.Cryptography.DataProtectionScope]::CurrentUser # scope of the encryption
    )
    $SecureStrBase64 = [System.Convert]::ToBase64String($SecureStr)
    return $SecureStrBase64
}

Going through the lines, we can see:

  1. PowerShell needs to know about the .NET classes (I have tested under versions 5 and 7 of PowerShell)
  2. We need to convert our string into a Byte array
  3. Use the .NET class to encrypt
  4. Convert the encrypted Byte array to a string for easy storage and retrieval
  5. Return that string

Decryption

Function Unprotect-WithUserKey {
    param (
        [Parameter(Mandatory=$true)]
        [string]$enc_secret
    )
    Add-Type -AssemblyName System.Security
    $SecureStr = [System.Convert]::FromBase64String($enc_secret)
    $bytes = [Security.Cryptography.ProtectedData]::Unprotect(
        $SecureStr,     # bytes to decrypt
        $null,          # optional entropy data
        [Security.Cryptography.DataProtectionScope]::CurrentUser) # scope of the decryption
    $secret = [System.Text.Encoding]::Unicode.GetString($bytes)
    return $secret
}

The steps for decryption are identical, using slightly different methods:

  1. PowerShell needs to know about the .NET classes
  2. We need to convert our string into a Byte array
  3. Use the .NET class to decrypt
  4. Convert the decrypted byte array back to a string
  5. Return that string
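The string-to-bytes plumbing shared by both functions can be checked on its own. Here is a minimal round-trip of just that wrapper logic (the DPAPI encryption step itself is skipped, since it only runs on Windows):

```powershell
# Round-trip only the wrapper logic shared by the two functions above:
# string -> Unicode bytes -> Base64 string, and back again.
$secret = 'my-api-key'
$bytes  = [System.Text.Encoding]::Unicode.GetBytes($secret)
$b64    = [System.Convert]::ToBase64String($bytes)

$back = [System.Text.Encoding]::Unicode.GetString(
    [System.Convert]::FromBase64String($b64))
$back -eq $secret   # True
```

In Protect-WithUserKey and Unprotect-WithUserKey, the DPAPI Protect/Unprotect call sits between these two conversions.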

Using it day-to-day

This is really useful if you are doing repetitive tasks that need these values. Often in a support role, investigations using APIs can speed up the process of analysis, and also provide you with a quick way to do fixes that don't require heavy use of a GUI-based environment.

Assigning a key to a secret value, and storing that in a hash table format, is the simplest way to have access to these values AND keep them stored locally with a degree of security. Your code can then look these values up dynamically, and if other support people store the same key locally the same way (often with different values; think of an API username/password pair), then your script can work for everyone.
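As a sketch of that idea (the file path and key names here are illustrative, not from the repository), a hashtable keyed by secret name can be persisted as JSON. In practice each value would be the Base64 output of Protect-WithUserKey, so the file never holds plain-text secrets:

```powershell
# Hypothetical local key store: a hashtable persisted to a JSON file.
# The values here are placeholders; store encrypted Base64 strings in practice.
$storePath = Join-Path ([System.IO.Path]::GetTempPath()) 'my-secret-store.json'

$store = @{
    'vendor-api-user' = '<base64-from-Protect-WithUserKey>'
    'vendor-api-key'  = '<base64-from-Protect-WithUserKey>'
}
$store | ConvertTo-Json | Set-Content -Path $storePath

# Any script can now look a secret up by its well-known key name.
$loaded = Get-Content -Path $storePath -Raw | ConvertFrom-Json
$loaded.'vendor-api-user'
```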

Again, MachineAndUserEncryption.ps1 in my ps-community-blog repository on GitHub has functions for persisting and using this information. For compatibility with versions 5 and 7 you also need the function ConvertTo-HashtableV5.

I would also recommend using Protect-WithMachineAndUserKey and Unprotect-WithMachineAndUserKey when implementing locally, they add another layer of protection.

Your own form-based key store

If you have followed my other two posts, about a scalable environment and simple form development, we can use the resources from them to easily create our own form to manage our secrets. In fact, if you have downloaded and installed the modules for either of those posts (they are the same modules, and this post references them as well), you have it ready to go.

Once you have your environment set up, simply run the cmdlet:

New-EncryptKeyForm

and if all is set up correctly, you should see

[Screenshot: the key/value secret store form]

Conclusion

Balancing pragmatic ease of use against the security concerns around secrets you may need all day, every day, can be a fine act. Using some simple methods, we can strike that balance and hopefully be securely productive.

Let's secure some stuff!

Simple form development using PowerShell
https://devblogs.microsoft.com/powershell-community/simple-form-development-using-powershell/
Mon, 11 Mar 2024 14:04:17 +0000
Create .NET forms without all the .NET.

The post Simple form development using PowerShell appeared first on PowerShell Community.

PowerShell is a tool for the command line. Most people who use it are comfortable with the command line. But sometimes there are valid use cases for providing a graphical user interface (GUI).

Important caveat

As PowerShell developers we need to be careful. We can do insanely complicated things with GUIs (and the .NET classes), and that is not a rod we want to make for our own backs!

Forms are based on .NET classes, but I have implemented a framework, so you do nothing more than create a JSON configuration and write simple functions in PowerShell. These functions are event-based functions contained in PowerShell cmdlets.

I am going to break this post into 3 parts:

  • Let's just get some forms up and running
  • How does all that work
  • Use cases for forms and PowerShell

Let's just get some forms up and running

  1. Download my ps-community-blog repository.

  2. If you know about PowerShell modules, add all the modules, or ALL the ps1 files to your current setup. If you don’t, that is OK, have a quick read of Creating a scalable, customised running environment, which shows you how to set up your PowerShell environment. The instructions in that post are actually for the same repository that this post uses, so it should be pretty helpful.

  3. Restart your current PowerShell session, which should load all the new modules.

  4. In the PS terminal window, run the cmdlet:

    New-SampleForm

    [Screenshot: launching a simple form]

    The PS terminal window that you launch the form from is now a slave to the form you have opened. I basically use this as an output for the user, so put it next to the opened form. If you have made it this far, that's it! If not, review your Profile.ps1 as suggested in Creating a scalable, customised running environment.

  5. Press the buttons and see what happens. You should see responses appear in the PS terminal window. The tram buttons call an API to get trams approaching stops in Melbourne, Australia for the current time. The other two buttons are just some fun ones I found when searching for functionality to show in the forms.

How do I create my own forms

Rather than following documentation (which, let's be honest, I have not written), understanding the basics and copying the examples is really the quickest way. Let's look at the SampleForm and work through it. You need a matching JSON and ps1 file.

[Screenshot: the matching JSON file and cmdlet]

I am not going to go into all the specifics; they should be obvious from the examples. But basically, a form has a list of elements, and they are placed at an x-y coordinate based on the x-y attribute in the element. When creating elements, the following is important:

  • Create a base json file of the right form size, with nothing in it.
  • Create a base matching cmdlet with only the # ===== TOP ===== and # ===== BOTTOM ===== sections in it. These two sections are identical in all form cmdlets.
  • Restart your PowerShell session to pick up the new cmdlet.
  • Add in elements 1 by 1 to the json file, getting them in the right position. You run the cmdlet after making changes to the json file.
  • Important: follow a naming convention, type_form_specificElement, for two reasons.
    1. You can't have two elements with the same name on a form.
    2. If you start getting fancy and adding tabs, including the form in the name is going to help you immensely. (I had to do a lot of refactoring when I added in tabs!)
  • Add in the Add_Click functions for your buttons. Keeping it simple, most of your functionality will be driven by your buttons. After updating your cmdlets, you will need to restart your PowerShell session to pick up the changes. I have found that using VS Code with the PowerShell extension and restarting PowerShell sessions is much cleaner than trying to unload and load modules when you update or add cmdlets.

And that is it. As a good friend/co-worker of mine says, it sounds easy when you say it quick, but the devil is in the detail. It can also be hard to debug.

An easy way to debug is to create a ps1 file with one line: the form's New-* cmdlet. Running that file in the debugger with breakpoints set is the simplest approach.

With just this, and some diving into the other examples, you will be surprised at the amount of functionality you can expose through your own GUI.

How does all that work

PowerShell has access to all the .NET classes sitting underneath it, and .NET has a rich, well-developed set of widgets to add to forms. Now, I am not a .NET developer, but it is pretty intuitive.

Load the Assemblies and look at the base cmdlets

Inside GeneralUtilities.psm1 you will see:

Get-ChildItem -Path "$PSScriptRoot\*.ps1" | ForEach-Object{
    . $PSScriptRoot\$($_.Name)
}
Add-Type -assembly System.Windows.Forms
Add-Type -AssemblyName System.Drawing
  • The first lines are my standard practice to load all the cmdlets in the module
  • The Add-Type lines here are the crucial ones. They tell the PowerShell session to load the .NET classes required for forms to function.
  • Inside the GeneralUtilities module are 3 important cmdlets
    • Set-FormFromJson is the driver: it reads the JSON file and iterates over all the elements, loading them onto the form by calling...
    • Set-FormElementsFromJson, which is where all the heavy .NET lifting is done. .NET Forms have been around so long, and are so consistent (and trust me, coming from an early-2000s web developer, this is wonderful), that with a basic switch you can implement them all very easily and expose their features through our JSON configuration. This could be developed infinitely more, but see the caveat at the start of this post: KISS is very important.
    • ConvertTo-HashtableV5. One of the most useful techniques in PowerShell is to always use the native objects (hashtables and lists) so that operations are consistent. I have found this particularly relevant for JSON files. I have included this as I rely on it heavily, due to PowerShell 5 having some deficiencies in this area, and I like all my stuff to work in PowerShell 5 AND 7. It is based on the post Convert JSON to a PowerShell hash table.
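To illustrate the PowerShell 5 problem that ConvertTo-HashtableV5 solves: ConvertFrom-Json there returns PSCustomObjects (PowerShell 7 offers -AsHashtable), so a recursive walk can normalise the result into native hashtables. This is a simplified sketch of the idea, not the repository's implementation:

```powershell
# Simplified sketch (not the repository code): walk a ConvertFrom-Json result
# recursively and rebuild it from native hashtables and arrays, so the same
# lookups work in PowerShell 5 and 7.
function ConvertTo-HashtableSketch {
    param($InputObject)

    if ($InputObject -is [System.Management.Automation.PSCustomObject]) {
        $hash = @{}
        foreach ($prop in $InputObject.PSObject.Properties) {
            $hash[$prop.Name] = ConvertTo-HashtableSketch $prop.Value
        }
        return $hash
    }
    if ($InputObject -is [object[]]) {
        # Comma operator keeps single-element arrays from unrolling.
        return ,@($InputObject | ForEach-Object { ConvertTo-HashtableSketch $_ })
    }
    return $InputObject
}

$config = ConvertTo-HashtableSketch ('{"form":{"title":"Sample","width":400}}' | ConvertFrom-Json)
$config['form']['title']   # Sample
```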

Creating a form

function New-SampleForm {
    [CmdletBinding()]
    param ()
    # ===== TOP =====
    $FormJson = $PSCommandPath.Replace(".ps1",".json")
    $NewForm, $FormElements = Set-FormFromJson $FormJson

    # ===== Single Tab =====
    # All your button clicks etc.

    # ===== BOTTOM =====
    $NewForm.ShowDialog()
}
Export-ModuleMember -Function New-SampleForm

The above is a template for creating any form. I am a firm believer in convention over configuration. It makes for less code and simpler design. With that in mind:

  • The New-SampleForm cmdlet should be in the file NewSampleForm.ps1.
  • NewSampleForm.json will be the configuration file for the form.
  • The TOP section finds the json file for the cmdlet based on convention, then loads all the elements.
  • The BOTTOM section makes the form appear.
  • TOP and BOTTOM sections will not change between different forms.

Everything else in between is where the fun happens. Copy and paste Add_Click functions, rename them following your JSON configuration, and you are away.

Use cases for forms and PowerShell

Quick access to common support tasks

The support team I am involved with has gone through a maturation in its use of PowerShell for support tasks over the last couple of years. We started by writing small cmdlets for repeatable tasks: file movement, Active Directory changes, data manipulation. Next we made some cmdlets to access vendor APIs, which helped us do tasks quickly instead of going through the vendor's GUI application.

All this functionality is now available through a tool that all the support guys use daily, and have even started contributing to.

Postman for ‘one thing’

If you don’t know Postman, it is a tool used to test APIs and web services, and it is one of a modern developer’s most useful tools. But we have some very technically savvy users who are not developers, and the ability for them to use some complex APIs dramatically improves their productivity (especially in non-production). It’s too easy to make mistakes in Postman, and for repeatable tasks with half a dozen inputs, we now have a tool that does some basic validation and hits the API endpoint with consistent and useful data.

Conclusion

You can get some big bang for minimal effort with the .NET Forms and help your fellow workers in an environment that may just be a bit easier for some of them than native cmdlets. Sooooo…

Let's break some stuff!

Creating a scalable, customised running environment
https://devblogs.microsoft.com/powershell-community/creating-a-scalable-customised-running-environment/
Fri, 23 Feb 2024 14:04:37 +0000
This post shows how to create an easy-to-support environment with all your own cmdlets.

The post Creating a scalable, customised running environment appeared first on PowerShell Community.

Often people come to PowerShell as a developer looking for a simpler life, or as a support person looking to make their life easier. Either way, we start exploring ways to encapsulate repeatable functionality, and in PowerShell that means cmdlets.

How to create these is defined well in Designing PowerShell For End Users. And Microsoft obviously has pretty good documentation, including How to Write a PowerShell Script Module. I also have a few basic rules I remember when creating cmdlets, to go along with the above posts:

  • Always use cmdlet binding.
  • Name the file the same as the cmdlet, without the dash.

But how do you organise them and ensure that they always load? This post outlines an approach that has worked well for me across a few different jobs, with a few iterations to get to this point.

Methods

There are two parts to making an effective running environment:

  • Ensuring all your cmdlets for a specific module will load.
  • Ensuring all your modules will load.

Example setup

[Screenshot: folder structure]

We are aiming high here. Over time your functionality will grow, and this shows a structure that allows for growth. There are three modules (effectively folders): Forms, BusinessUtilities and GeneralUtilities. They are broken up into two main groupings, my-support and my-utilities. ps-community-blog is the GitHub repository where you can find this example.

Inside the GeneralUtilities folder you can see the all-important .psm1, with the same name as the folder, and a couple of cmdlets I have needed over the years. The .psm1 file is a requirement to create a module.

Ensuring all your cmdlets for a specific module will load

Most descriptions of creating modules will explain that you need to either add the cmdlet into the .psm1, or load the cmdlet files in the .psm1 file. Instead, put the below in ALL your .psm1 module files:

Get-ChildItem -Path "$PSScriptRoot\*.ps1" | ForEach-Object {
    . $PSScriptRoot\$($_.Name)
}

What does this do and why does it work?

  • At a high level, it iterates over the current folder, and runs every .ps1 file as PowerShell.
  • $PSScriptRoot is the key here, and tells the running session where the current code is located.

This means you can create cmdlets under this structure, and they will automatically load when you start up a new PowerShell session.

Ensuring all your modules will load

So, the modules are sorted. How do we make sure the modules themselves load? It’s all about the Profile.ps1. You will either find it or need to create it in:

  • PowerShell 5 and lower – $HOME\Documents\WindowsPowerShell\Profile.ps1.
  • PowerShell 7 – $HOME\Documents\PowerShell\Profile.ps1.
  • For detailed information, see About Profiles.

This file runs at the start of every session opened on your machine. I have included both 5 and 7 because, in a lot of corporate environments, 5 is all that is available, and often people don't have access to modify their environment. With some simple code we can ensure our modules will load. Add this into your Profile.ps1:

Write-Host "Loading Modules for Day-to-Day use"
$ErrorActionPreference = "Stop" # A safeguard for any errors

$MyModuleDef = @{
    Utilities = @{
        Path    = "C:\work\git-personal\ps-community-blog\my-utilities"
        Exclude = @(".git")
    }
    Support = @{
        Path    = "C:\work\git-personal\ps-community-blog\my-support"
        Exclude = @(".git")
    }
}

foreach ($key in $MyModuleDef.Keys) {
    $module  = $MyModuleDef[$key]
    $exclude = $module.Exclude

    $env:PSModulePath = @(
        $env:PSModulePath
        $module.Path
    ) -join [System.IO.Path]::PathSeparator

    Get-ChildItem -Path $module.Path -Directory -Exclude $exclude |
        ForEach-Object {
            Write-Host "Loading Module $($_.Name) in $Key"
            Import-Module $_.Name
        }
}

What does this do and why does it work?

  • At a high level, it defines your module groupings, then loads your modules into the PowerShell session.
  • $MyModuleDef contains the reference to your module groupings, to make sure all the sub folders are loaded as modules.
  • Exclude is very important. You may load the code directly off your code base, so ignoring folders like .git as modules is important. I have also put DLLs in folders within module groupings, and ignoring these is important as well.

Now, every time you open any PowerShell session on your machine, all your local cmdlets will be there, ready to use with all the wonderful functionality you have created.
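A quick way to confirm the profile did its job is to inspect the module search path and the loaded modules (the filter string below is illustrative; match it to your own folder names):

```powershell
# Did Profile.ps1 register our folders? Inspect the module search path...
$env:PSModulePath -split [System.IO.Path]::PathSeparator |
    Where-Object { $_ -like '*ps-community-blog*' }

# ...and list which modules the session has actually imported.
Get-Module | Select-Object Name, Path
```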

Conclusion

Having your own PowerShell cmdlets at your fingertips, with minimal overhead or thinking, makes your PowerShell experience so very much more rewarding. It also makes it easier to do as I like to do and start the day with my favourite mantra:

Let's break some stuff!

Using PowerShell and Twilio API for Efficient Communication in Contact Tracing
https://devblogs.microsoft.com/powershell-community/powershell-twilio-contact-tracing-communication/
Wed, 29 Nov 2023 23:10:20 +0000
Learn to integrate PowerShell with Twilio API and streamline communication for COVID-19 contact tracing initiatives.

The post Using PowerShell and Twilio API for Efficient Communication in Contact Tracing appeared first on PowerShell Community.

The COVID-19 pandemic has underscored the importance of rapid and reliable communication technology. One vital application is in contact tracing efforts, where prompt notifications can make a significant difference. This guide focuses on utilizing PowerShell in conjunction with the Twilio API to establish an automated SMS notification system, an essential communication tool for contact tracing.

Integrating Twilio with PowerShell

Before diving into scripting, you need to create a Twilio account.

Registering and Preparing Twilio Credentials

Once registered, obtain your Account SID and Auth Token. These credentials are the keys to accessing Twilio’s SMS services. Then, choose a Twilio phone number, which will be the source of your outgoing messages.

PowerShell Scripting to Send SMS via Twilio

With your Twilio environment prepared, the next step is to configure PowerShell to interact with Twilio’s API. Start by storing your Twilio credentials as environment variables or securely within your script, ensuring they are not exposed or hard-coded.

$twilioAccountSid = 'Your_Twilio_SID'
$twilioAuthToken = 'Your_Twilio_Auth_Token'
$twilioPhoneNumber = 'Your_Twilio_Number'
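One way to follow the "not hard-coded" advice is to read these values from environment variables at run time. The variable names below are an assumption for illustration; use whatever your environment defines:

```powershell
# Read Twilio credentials from environment variables instead of the script.
# (The TWILIO_* names are illustrative, not a Twilio requirement.)
$twilioAccountSid  = $env:TWILIO_ACCOUNT_SID
$twilioAuthToken   = $env:TWILIO_AUTH_TOKEN
$twilioPhoneNumber = $env:TWILIO_PHONE_NUMBER

if (-not $twilioAccountSid -or -not $twilioAuthToken) {
    Write-Warning 'Twilio credentials are not set in this environment.'
}
```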

After the setup and with the appropriate Twilio module installed, crafting a PowerShell script to dispatch an SMS using Twilio’s API is straightforward:

Import-Module Twilio

$toPhoneNumber = 'Recipient_Phone_Number'
$credential = [pscredential]::new($twilioAccountSid,
    (ConvertTo-SecureString $twilioAuthToken -AsPlainText -Force))

# Twilio API URL for sending SMS messages
$uri = "https://api.twilio.com/2010-04-01/Accounts/$twilioAccountSid/Messages.json"

# Preparing the payload for the POST request
$requestParams = @{
    From = $twilioPhoneNumber
    To = $toPhoneNumber
    Body = 'Your body text here.'
}

$invokeRestMethodSplat = @{
    Uri = $uri
    Method = 'Post'
    Credential = $credential
    Body = $requestParams
}

# Using the Invoke-RestMethod command for API interaction
$response = Invoke-RestMethod @invokeRestMethodSplat

Execute the script, and if all goes as planned, you should see a confirmation of the SMS being sent.

Preparing Data for Automated Notifications

Before we can automate the sending of notifications, we need to have our contact data organized and accessible. This is typically done by creating a CSV file, which PowerShell can easily parse and utilize within our script.

Creating a CSV File

A CSV (Comma-Separated Values) file is a plain text file that contains a list of data. For contact tracing notifications, we can create a CSV file that holds the information of individuals who need to receive SMS alerts. Here is an example of what the content of this CSV file might look like:

Name,Phone
John Doe,+1234567890
Jane Smith,+1098765432
Alex Johnson,+1123456789

In this simple table, each column is separated by a comma. The first row is the header, which describes the content of each column. Subsequent rows contain the data for each person, with their name and phone number.
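Import-Csv turns each data row into an object whose property names come from that header row, which is what makes the loop in the next section possible. A small, self-contained check (the temp-file path is just for illustration):

```powershell
# Write a sample contact list to a temp file, then read it back with
# Import-Csv: each row becomes an object with .Name and .Phone properties.
$csvPath = Join-Path ([System.IO.Path]::GetTempPath()) 'contact_list.csv'
@'
Name,Phone
John Doe,+1234567890
Jane Smith,+1098765432
'@ | Set-Content -Path $csvPath

$contactList = Import-Csv -Path $csvPath
foreach ($contact in $contactList) {
    "$($contact.Name) -> $($contact.Phone)"
}
```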

Automating the Process for Contact Tracing

Once manual sending is confirmed and the CSV file is ready, you can move towards automating the process for contact tracing:

Import-Module Twilio

$contactList = Import-Csv -Path 'contact_list.csv'

# Create Twilio API credentials
$credential = [pscredential]::new($twilioAccountSid,
    (ConvertTo-SecureString $twilioAuthToken -AsPlainText -Force))

# Twilio API URL for sending SMS messages
$uri = "https://api.twilio.com/2010-04-01/Accounts/$twilioAccountSid/Messages.json"

foreach ($contact in $contactList) {
    $requestParams = @{
        From = $twilioPhoneNumber
        To = $contact.Phone
        Body = "Please be informed of a potential COVID-19 exposure. Follow public health guidelines."
    }

    $invokeRestMethodSplat = @{
        Uri = $uri
        Method = 'Post'
        Credential = $credential
        Body = $requestParams
    }
    $response = Invoke-RestMethod @invokeRestMethodSplat

    # Log or take action based on $response as needed
}

By looping through a list of contacts and sending a personalized SMS to each, you are leveraging automation for mass communication, a critical piece in the contact tracing puzzle.

Conclusion

In this post, we’ve reviewed how to establish a bridge between PowerShell and Twilio’s messaging API to execute automated SMS notifications. Such integrations are at the heart of communication technology advancements, facilitating critical public health operations like contact tracing.

Automate Text Summarization with OpenAI and PowerShell
https://devblogs.microsoft.com/powershell-community/automate-text-summarization-with-openai-powershell/
Mon, 20 Nov 2023 20:57:54 +0000
This easy-to-follow guide shows you how to use PowerShell to summarize text using OpenAI's GPT-3.5 API.

The post Automate Text Summarization with OpenAI and PowerShell appeared first on PowerShell Community.

Automating tasks is the core of PowerShell scripting. Adding artificial intelligence into the mix takes automation to a whole new level. Today, we’ll simplify the process of connecting to OpenAI’s powerful text summarization API from PowerShell. Let’s turn complex AI interaction into a straightforward script.

To follow this guide, you’ll need an OpenAI API key. If you don’t already have one, you’ll need to create an OpenAI account or sign in to an existing one. Next, navigate to the API key page and create a new secret key to use.

Step-by-Step Function Creation

Step 1: Define the Function and Parameters

We’ll start by setting up our function with parameters such as the API key and text to summarize:

function Invoke-OpenAISummarize {
    param(
        [string]$apiKey,
        [string]$textToSummarize,
        [int]$maxTokens = 60,
        [string]$engine = 'davinci'
    )
    # You can add or remove parameters as per your requirements
}

Step 2: Set Up API Connection Details

Next, we’ll prepare our connection to OpenAI’s API by specifying the URL and headers:

    $uri = "https://api.openai.com/v1/engines/$engine/completions"
    $headers = @{
        'Authorization' = "Bearer $apiKey"
        'Content-Type' = 'application/json'
    }

Step 3: Construct the Body of the Request

We need to tell the API what we want it to do: summarize text. We do this in the request body:

    $body = @{
        prompt = "Summarize the following text: `"$textToSummarize`""
        max_tokens = $maxTokens
        n = 1
    } | ConvertTo-Json
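To see what this step actually sends, the same body can be built stand-alone with sample values (illustrative only) and inspected before any API call is made:

```powershell
# Stand-alone preview of the request body built above, with sample values.
$textToSummarize = 'PowerShell is a task automation program from Microsoft.'
$maxTokens = 60

$body = @{
    prompt     = "Summarize the following text: `"$textToSummarize`""
    max_tokens = $maxTokens
    n          = 1
} | ConvertTo-Json

$body   # the JSON string that will be sent as the request body
```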

Step 4: Make the API Request and Return the Summary

The final part of the function sends the request and then gets the summary back from the API:

    $parameters = @{
        Method      = 'POST'
        URI         = $uri
        Headers     = $headers
        Body        = $body
        ErrorAction = 'Stop'
    }

    try {
        $response = Invoke-RestMethod @parameters
        return $response.choices[0].text.Trim()
    } catch {
        Write-Error "Failed to invoke OpenAI API: $_"
        return $null
    }
}

Running the Function

Now, to use the function, you just need two pieces of information: your OpenAI API key and the text to summarize.

$summary = Invoke-OpenAISummarize -apiKey 'Your_Key' -textToSummarize 'Your text...'
Write-Output "Summary: $summary"

Replace 'Your_Key' with your actual key and 'Your text...' with what you want to summarize.

Here’s how I am running this function at my local PowerShell prompt, with text I copied from Wikipedia:

$summary = Invoke-OpenAISummarize -apiKey '*********' -textToSummarize @'
PowerShell is a task automation and configuration management program from
Microsoft, consisting of a command-line shell and the associated scripting
language. Initially a Windows component only, known as Windows PowerShell,
it was made open-source and cross-platform on August 18, 2016, with the
introduction of PowerShell Core.[5] The former is built on the .NET Framework,
the latter on .NET (previously .NET Core).
'@

and I get the following result:

PowerShell, initially Windows-only, is a Microsoft automation tool that became
cross-platform as open-source PowerShell Core, transitioning from .NET Framework
to .NET.

Conclusion

Combining AI with PowerShell scripting is like giving superpowers to your computer. By breaking down each step and keeping it simple, you can see how easy it is to automate text summarization using OpenAI’s GPT-3.5 API. Now, try it out and see how you can make this script work for you!

Remember, the beauty of scripts is in their flexibility, so feel free to tweak and expand the function to fit your needs.

Happy scripting and enjoy the power of AI at your fingertips!

Changing your console window title
https://devblogs.microsoft.com/powershell-community/changing-console-title/
Tue, 11 Jul 2023 14:22:02 +0000
This post shows how to change the title of your console terminal window.

The post Changing your console window title appeared first on PowerShell Community.

As our skill as PowerShell developers grows, and the complexity of our scripts increases, we start incorporating new elements to improve the user experience. That might include changing fonts, the background color, or the console window title. This task was already discussed in a blog post from 2004, Can I Change the Command Window Title When Running a Script?. However, that post uses VBScript, and only changes the title if you are willing to open a new console. Today we learn how to do it with PowerShell, using the same window.

Methods

We will explore two ways of changing the console window title.

  • The $Host automatic variable.
  • Console virtual terminal sequences.

The $Host automatic variable

This variable contains an object that represents the current host application for PowerShell. This object contains a property called $Host.UI.RawUI that allows us to change various aspects of the current PowerShell host, including the window title. Here is how we do it.

$Host.UI.RawUI.WindowTitle = 'MyCoolWindowTitle!'

And with just a property value change, our window title changes.

RawUI.WindowTitle

As simple and straightforward as the previous method is, there is something to keep in mind: the $Host automatic variable is host-dependent, so what works in the regular console may behave differently in other hosts.
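For example, a defensive sketch (the host name check is my suggestion, not from the post) can verify which host is running before touching the title:

```powershell
# Minimal sketch: guard the title change by checking the host.
# 'ConsoleHost' is what the regular console reports; other hosts
# (the ISE, for example) report different names and may implement
# RawUI differently.
if ($Host.Name -eq 'ConsoleHost') {
    $Host.UI.RawUI.WindowTitle = 'MyCoolWindowTitle!'
}
else {
    Write-Verbose "Host '$($Host.Name)' may not support setting the window title this way."
}
```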

Virtual terminal sequences

Console virtual terminal sequences are control character sequences that, when written to the output stream, are intercepted by the console host and can control various aspects of the console. To see all sequences, and more in-depth examples, go to the Microsoft documentation page. Virtual terminal sequences are preferred because they follow a well-defined standard and are fully documented. Note that the window title is limited to 255 characters.

To change the window title the sequence is ESC]0;<string><ST> or ESC]2;<string><ST>, where

  • ESC is character 0x1B.
  • <ST> is the string terminator, which in this case is the “Bell” character 0x7.

The bell character can also be used with the escape sequence \a. Here is how we change a console window title with virtual terminal sequences.

$title = 'Title with terminal sequences!'

Write-Host "$([char]0x1B)]0;$title$([char]0x7)"

# Using the escape sequence.
Write-Host "$([char]0x1B)]0;$title`a"
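Whichever method you choose, a common pattern (restoring the title on exit is my assumption about desired behavior, not from the post) is to read the current title first through $Host, change it for the duration of a task, and put it back afterwards:

```powershell
# Save the current title, change it while a task runs,
# and restore it afterwards, even if the task fails.
$previousTitle = $Host.UI.RawUI.WindowTitle
try {
    $Host.UI.RawUI.WindowTitle = 'Long running task...'
    # ... long-running work goes here ...
}
finally {
    $Host.UI.RawUI.WindowTitle = $previousTitle
}
```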

Conclusion

PowerShell is a versatile tool that often provides multiple ways of achieving the same goal. I hope you had as much fun reading as I had writing. See you in the next one.

Happy scripting!

The post Changing your console window title appeared first on PowerShell Community.

Measuring average download time https://devblogs.microsoft.com/powershell-community/measuring-download-time/ https://devblogs.microsoft.com/powershell-community/measuring-download-time/#comments Tue, 27 Jun 2023 18:13:36 +0000 https://devblogs.microsoft.com/powershell-community/?p=1042 This post shows how to measure average download time with PowerShell

One of the most overlooked roles of a systems administrator is troubleshooting network issues. How many times have you been in a situation where your servers are problematic, and someone asked you to check the network connectivity? One of the steps is checking download time and speed, and although there are countless tools available, today we will learn how to do it natively, with PowerShell.

Methods

We will focus on three methods, ranging from the easiest to the most complex, and discuss their pros and cons. These methods are the Start-BitsTransfer Cmdlet, using .NET with the System.Net namespace, and using the Windows native API.

Start-BitsTransfer

BITS, or Background Intelligent Transfer Service, is a Windows service that manages content transfer using HTTP or SMB. It was designed to manage the many aspects of content transfer, including cost, speed, priority, etc. For us, it also serves as an easy way of downloading files. Here is how you download a file from a web server using BITS:

$startBitsTransferSplat = @{
    Source = 'https://www.contoso.com/Files/BitsDefinition.txt'
    Destination = 'C:\BitsDefinition.txt'
}
Start-BitsTransfer @startBitsTransferSplat

Another great advantage of BITS is that it shows progress, which can be useful while downloading big files. In our case, however, we want to know how long it takes to download a file. For this we will use a handy object of type System.Diagnostics.Stopwatch.

$stopwatch = [System.Diagnostics.Stopwatch]::new()

$stopwatch.Start()
$startBitsTransferSplat = @{
    Source = 'https://www.contoso.com/Files/BitsDefinition.txt'
    Destination = 'C:\BitsDefinition.txt'
}
Start-BitsTransfer @startBitsTransferSplat
$stopwatch.Stop()

Write-Output $stopwatch.Elapsed
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 816
Ticks             : 8165482
TotalDays         : 9.45078935185185E-06
TotalHours        : 0.000226818944444444
TotalMinutes      : 0.0136091366666667
TotalSeconds      : 0.8165482
TotalMilliseconds : 816.5482

Awesome, we now have a baseline to build our script upon. The first thing we will change is the file. Since we are more interested in the speed, we can download to temporary files. That also gives us the opportunity to clean up at the end. For this we will use a static method from System.IO.Path called GetTempFileName. We should also run the test a number of times and calculate the average, so that the results are more reliable.

# Changing the progress preference to hide the progress bar.
$ProgressPreference = 'SilentlyContinue'
$payloadUrl = 'https://www.contoso.com/Files/BitsDefinition.txt'
$stopwatch = New-Object -TypeName 'System.Diagnostics.Stopwatch'
$elapsedTime = [timespan]::Zero
$iterationNumber = 3

# Here we are using a foreach loop with a range,
# but this can also be accomplished with a for loop.
foreach ($iteration in 1..$iterationNumber) {
    $tempFilePath = [System.IO.Path]::GetTempFileName()

    $stopwatch.Restart()
    Start-BitsTransfer -Source $payloadUrl -Destination $tempFilePath
    $stopwatch.Stop()

    Remove-Item -Path $tempFilePath
    $elapsedTime = $elapsedTime.Add($stopwatch.Elapsed)
}

# Timespan.Divide is not available on .NET Framework.
if ($PSVersionTable.PSVersion -ge [version]'6.0') {
    $average = $elapsedTime.Divide($iterationNumber)
}
else {
    $average = [timespan]::new($elapsedTime.Ticks / $iterationNumber)
}

return $average

Great, now we can run the test as many times as we want and get consistent results. This looping system will also serve as a skeleton for the other methods we will try.
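Since every method we benchmark reuses this looping-and-averaging skeleton, one optional refactor (an illustration of the pattern, not part of the original post) is to wrap it in a helper that accepts the download step as a script block:

```powershell
function Measure-AverageTime {
    [CmdletBinding()]
    param (
        # The action to measure, e.g. a download step.
        [Parameter(Mandatory, Position = 0)]
        [scriptblock]$Action,

        [int]$IterationNumber = 3
    )

    $stopwatch = [System.Diagnostics.Stopwatch]::new()
    $elapsedTime = [timespan]::Zero

    foreach ($iteration in 1..$IterationNumber) {
        $stopwatch.Restart()
        & $Action
        $stopwatch.Stop()

        $elapsedTime = $elapsedTime.Add($stopwatch.Elapsed)
    }

    # Timespan.Divide is not available on .NET Framework, so divide the ticks.
    return [timespan]::new($elapsedTime.Ticks / $IterationNumber)
}

# Hypothetical usage with the BITS step from above:
# Measure-AverageTime { Start-BitsTransfer @startBitsTransferSplat }
```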

System.Net.HttpWebRequest

Using Start-BitsTransfer is great because it’s easy to set up; however, it is not the most efficient way. BITS transfers have some overhead involved to start, maintain, and clean up jobs, manage throttling, etc. If we want to keep our results as true as possible, we need to go down an abstraction level. This method uses the following workflow:

  • Creates a request to the destination URI.
  • Gets the response, and response stream.
  • Creates the temporary file by opening a file stream.
  • Downloads the binary data, and writes in the file stream.
  • Closes the request, and file streams.

Here is what this implementation looks like:

$uri = [uri]'https://www.contoso.com/Files/BitsDefinition.txt'

$stopwatch = [System.Diagnostics.Stopwatch]::new()

$request = [System.Net.HttpWebRequest]::Create($uri)

# If necessary you can set the download timeout in milliseconds.
$request.Timeout = 15000

$stopwatch.Restart()

# Getting the response stream, creating the temp file, and opening a file stream.
$responseStream = $request.GetResponse().GetResponseStream()
$tempFilePath = [System.IO.Path]::GetTempFileName()

$targetStream = [System.IO.FileStream]::new($tempFilePath, 'Create')

# You can experiment with the size of the byte array to try to get the best performance.
$buffer = [System.Byte[]]::new(10Kb)

# Reading data and writing to the file stream, until there is no more data to read.
do {
    $count = $responseStream.Read($buffer, 0, $buffer.Length)
    $targetStream.Write($buffer, 0, $count)

} while ($count -gt 0)

# Stopping the stopwatch, and storing the elapsed time.
$stopwatch.Stop()

# Disposing of unmanaged resources, and deleting the temp file.
$targetStream.Dispose()
$responseStream.Dispose()

Remove-Item -Path $tempFilePath

return $stopwatch.Elapsed

There are definitely more steps, and more points of failure, so how does it perform against the BITS method? Here are the results of both methods, using the same file and 10 iterations.

BITS:

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 657
Ticks             : 6575274
TotalDays         : 7.61027083333333E-06
TotalHours        : 0.0001826465
TotalMinutes      : 0.01095879
TotalSeconds      : 0.6575274
TotalMilliseconds : 657.5274

HttpWebRequest:

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 315
Ticks             : 3151956
TotalDays         : 3.64809722222222E-06
TotalHours        : 8.75543333333333E-05
TotalMinutes      : 0.00525326
TotalSeconds      : 0.3151956
TotalMilliseconds : 315.1956

Looking good: a little less than half. Now we know we are closer to the real time spent downloading the file. But the question is, if .NET is also an abstraction layer, how low can we go? To the operating system, of course.

Native

Although there are multiple abstraction layers in the OS itself, there is a user-mode API exposed by Winhttp.dll whose exported functions can be used in PowerShell through Platform Invoke. This means we need to use C# to create these function signatures in managed .NET. Here is what that code looks like:

namespace Utilities
{
    using System;
    using System.IO;
    using System.Runtime.InteropServices;

    public class WinHttp
    {
        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern IntPtr WinHttpOpen(
            string pszAgentW,
            uint dwAccessType,
            string pszProxyW,
            string pszProxyBypassW,
            uint dwFlags
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern IntPtr WinHttpConnect(
            IntPtr hSession,
            string pswzServerName,
            uint nServerPort,
            uint dwReserved
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern IntPtr WinHttpOpenRequest(
            IntPtr hConnect,
            string pwszVerb,
            string pwszObjectName,
            string pwszVersion,
            string pwszReferrer,
            string ppwszAcceptTypes,
            uint dwFlags
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool WinHttpSendRequest(
            IntPtr hRequest,
            string lpszHeaders,
            uint dwHeadersLength,
            IntPtr lpOptional,
            uint dwOptionalLength,
            uint dwTotalLength,
            UIntPtr dwContext
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool WinHttpReceiveResponse(
            IntPtr hRequest,
            IntPtr lpReserved
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool WinHttpQueryDataAvailable(
            IntPtr hRequest,
            out uint lpdwNumberOfBytesAvailable
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool WinHttpReadData(
            IntPtr hRequest,
            IntPtr lpBuffer,
            uint dwNumberOfBytesToRead,
            out uint lpdwNumberOfBytesRead
        );

        [DllImport("Winhttp.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        public static extern bool WinHttpCloseHandle(IntPtr hInternet);
    }
}

Then we can use Add-Type to compile, and import this type in PowerShell.

Add-Type -TypeDefinition (Get-Content -Path 'C:\WinHttpHelper.cs' -Raw)

After that, the method is similar to the .NET one, with a few more steps. It makes sense that they are alike, because at some point .NET calls the Windows API. Note that Winhttp.dll is not the only API that can be used to download files. This is what the PowerShell code looks like:

$stopwatch = New-Object -TypeName 'System.Diagnostics.Stopwatch'

# Here we open a WinHttp session, connect to the destination host,
# and open a request to the file.
$Uri = [uri]'https://www.contoso.com/Files/BitsDefinition.txt'
$hSession = [Utilities.WinHttp]::WinHttpOpen('NativeDownload', 0, '', '', 0)
$hConnect = [Utilities.WinHttp]::WinHttpConnect($hSession, $Uri.Host, 443, 0)

# 0x00800000 is WINHTTP_FLAG_SECURE, needed because the sample URL uses HTTPS.
$hRequest = [Utilities.WinHttp]::WinHttpOpenRequest(
    $hConnect, 'GET', $Uri.AbsolutePath, '', '', '', 0x00800000
)

$stopwatch.Start()
# Sending the first request.
$boolResult = [Utilities.WinHttp]::WinHttpSendRequest(
    $hRequest, '', 0, [IntPtr]::Zero, 0, 0, [UIntPtr]::Zero
)
if (!$boolResult) {
    Write-Error 'Failed sending request.'
}
if (![Utilities.WinHttp]::WinHttpReceiveResponse($hRequest, [IntPtr]::Zero)) {
    Write-Error 'Failed receiving response.'
}

# Creating the temp file memory stream.
$tempFilePath = [System.IO.Path]::GetTempFileName()
$fileStream = [System.IO.FileStream]::new($tempFilePath, 'Create')

# Reading data until there is no more data available.
do {
    # Querying if there is data available.
    $dwSize = 0
    if (![Utilities.WinHttp]::WinHttpQueryDataAvailable($hRequest, [ref]$dwSize)) {
        Write-Error 'Failed querying for available data.'
    }

    # Allocating memory, and creating the byte array who will hold the managed data.
    $chunk = New-Object -TypeName "System.Byte[]" -ArgumentList $dwSize
    $buffer = [System.Runtime.InteropServices.Marshal]::AllocHGlobal($dwSize)

    # Reading the data.
    try {
        $boolResult = [Utilities.WinHttp]::WinHttpReadData(
            $hRequest, $buffer, $dwSize, [ref]$dwSize
        )
        if (!$boolResult) {
            Write-Error 'Failed to read data.'
        }

        # Copying the data from the unmanaged pointer to the managed byte array,
        # then writing the data into the file stream.
        [System.Runtime.InteropServices.Marshal]::Copy($buffer, $chunk, 0, $chunk.Length)
        $fileStream.Write($chunk, 0, $chunk.Length)
    }
    finally {
        # Freeing the unmanaged memory.
        [System.Runtime.InteropServices.Marshal]::FreeHGlobal($buffer)
    }

} while ($dwSize -gt 0)
$stopwatch.Stop()

# Closing the unmanaged handles.
[void][Utilities.WinHttp]::WinHttpCloseHandle($hRequest)
[void][Utilities.WinHttp]::WinHttpCloseHandle($hConnect)
[void][Utilities.WinHttp]::WinHttpCloseHandle($hSession)

# Disposing of the file stream will close the file handle, which will allow us
# to manage the file later.
$fileStream.Dispose()

Remove-Item -Path $tempFilePath

return $stopwatch.Elapsed

Now with all this extra work you might be asking, how does it perform?

HttpWebRequest:

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 281
Ticks             : 2819990
TotalDays         : 3.26387731481481E-06
TotalHours        : 7.83330555555556E-05
TotalMinutes      : 0.00469998333333333
TotalSeconds      : 0.281999
TotalMilliseconds : 281.999

Native:

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 249
Ticks             : 2497170
TotalDays         : 2.89024305555556E-06
TotalHours        : 6.93658333333333E-05
TotalMinutes      : 0.00416195
TotalSeconds      : 0.249717
TotalMilliseconds : 249.717

Wait, that’s almost the same thing. Why is that? We are calling the OS API directly! Well, we are, but we are managing everything from PowerShell, while .NET runs compiled code from a library. So what if we move all the request work into our C# code, and use it as a method? Here’s what said method looks like:

public static string NativeDownload(Uri uri)
{
    IntPtr hInternet = WinHttpOpen("NativeFileDownloader", 0, "", "", 0);
    if (hInternet == IntPtr.Zero)
        throw new SystemException(Marshal.GetLastWin32Error().ToString());

    IntPtr hConnect = WinHttpConnect(hInternet, uri.Host, 443, 0);
    if (hConnect == IntPtr.Zero)
        throw new SystemException(Marshal.GetLastWin32Error().ToString());

    IntPtr hReq = WinHttpOpenRequest(hConnect, "GET", uri.AbsolutePath, "", "", "", 0);
    if (hReq == IntPtr.Zero)
        throw new SystemException(Marshal.GetLastWin32Error().ToString());

    if (!WinHttpSendRequest(hReq, "", 0, IntPtr.Zero, 0, 0, UIntPtr.Zero))
        throw new SystemException(Marshal.GetLastWin32Error().ToString());

    if (!WinHttpReceiveResponse(hReq, IntPtr.Zero))
        throw new SystemException(Marshal.GetLastWin32Error().ToString());

    string tempFilePath = Path.GetTempFileName();
    FileStream fileStream = new FileStream(tempFilePath, FileMode.Create);
    uint dwBytes;
    do
    {
        if (!WinHttpQueryDataAvailable(hReq, out dwBytes))
            throw new SystemException(Marshal.GetLastWin32Error().ToString());

        byte[] chunk = new byte[dwBytes];
        IntPtr buffer = Marshal.AllocHGlobal((int)dwBytes);
        try
        {
            if (!WinHttpReadData(hReq, buffer, dwBytes, out _))
                throw new SystemException(Marshal.GetLastWin32Error().ToString());

            Marshal.Copy(buffer, chunk, 0, chunk.Length);
            fileStream.Write(chunk, 0, chunk.Length);
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
    } while (dwBytes > 0);

    WinHttpCloseHandle(hReq);
    WinHttpCloseHandle(hConnect);
    WinHttpCloseHandle(hInternet);

    fileStream.Dispose();

    return tempFilePath;
}

The results:

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 191
Ticks             : 1917438
TotalDays         : 2.21925694444444E-06
TotalHours        : 5.32621666666667E-05
TotalMinutes      : 0.00319573
TotalSeconds      : 0.1917438
TotalMilliseconds : 191.7438

And there we go, a slightly faster download. Is the small improvement worth all the extra work? I say yes: it gives us the opportunity to expand our operating system knowledge.

Bonus

Before we wrap up: we have calculated the average time, but what about the speed? How can my script be as cool as those internet speed measuring websites? Well, we have the time; all we need is the file size, and we can calculate the speed:

$uri = [uri]'https://www.contoso.com/Files/BitsDefinition.txt'

# Getting the total file size in bytes.
$totalSizeBytes = [System.Net.HttpWebRequest]::Create($uri).GetResponse().ContentLength

# Elapsed time here is the result of the previous methods.
if ($PSVersionTable.PSVersion -ge [version]'6.0') {
    $average = $elapsedTime.Divide($iterationNumber)
}
else { $average = [timespan]::new($elapsedTime.Ticks / $iterationNumber) }

# Calculating the speed in Bytes/second
$bytesPerSecond = $totalSizeBytes / $average.TotalSeconds

# Creating an output string based on the B/s result.
switch ($bytesPerSecond) {
    { $_ -gt 99 } { $speed = "$([Math]::Round($bytesPerSecond / 1KB, 2)) KB/s" }
    { $_ -gt 101376 } { $speed = "$([Math]::Round($bytesPerSecond / 1MB, 2)) MB/s" }
    { $_ -gt 103809024 } { $speed = "$([Math]::Round($bytesPerSecond / 1GB, 2)) GB/s" }
    { $_ -gt 106300440576 } { $speed = "$([Math]::Round($bytesPerSecond / 1TB, 2)) TB/s" }
    Default { $speed = "$([Math]::Round($bytesPerSecond, 2)) B/s" }
}

return [PSCustomObject]@{
    Speed = $speed
    TimeSpan = $average
}
Speed    TimeSpan
-----    --------
3.6 MB/s 00:00:00.2070106

Conclusion

If you got to this point, I hope you had as much fun as I did. You can find all the code we wrote on my GitHub page.

Until the next one, happy scripting!

The post Measuring average download time appeared first on PowerShell Community.

Measuring script execution time https://devblogs.microsoft.com/powershell-community/measuring-script-execution-time/ Mon, 15 May 2023 15:56:56 +0000 https://devblogs.microsoft.com/powershell-community/?p=1007 This post shows how to measure script execution time in PowerShell

Most of the time while developing PowerShell scripts we don’t need to worry about performance or execution time. After all, scripts were made to run automation in the background. However, as your scripts become more sophisticated, and you need to work with complex data or big data sizes, performance becomes something to keep in mind. Measuring a script’s execution time is the first step towards script optimization.

Measure-Command

PowerShell has a built-in cmdlet called Measure-Command, which measures the execution time of other cmdlets, or script blocks. It has two parameters:

  • Expression: The script block to be measured.
  • InputObject: Optional input to be passed to the script block. You can use $_ or $PSItem to access them.

Besides the two parameters, objects in the pipeline are also passed to the script block. Measure-Command returns an object of type System.TimeSpan, giving us more flexibility in how we work with the result.

Measure-Command { foreach ($number in 1..1000) { <# Do work #> } }
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 8
Ticks             : 85034
TotalDays         : 9.84189814814815E-08
TotalHours        : 2.36205555555556E-06
TotalMinutes      : 0.000141723333333333
TotalSeconds      : 0.0085034
TotalMilliseconds : 8.5034

Using the pipeline or the InputObject parameter.

1..1000 |
    Measure-Command -Expression { foreach ($number in $_) { <# Do work #> } } |
    Select-Object TotalMilliseconds
TotalMilliseconds
-----------------
            10.60
Measure-Command -InputObject (1..1000) -Expression { $_ | % { <# Do work #> } } |
    Select-Object TotalMilliseconds
TotalMilliseconds
-----------------
            19.98

Scope and Object Modification

Measure-Command runs the script block in the current scope, meaning variables in the current scope get modified if referenced in the script block.

$studyVariable = 0
Measure-Command { 1..10 | % { $studyVariable += 1 } }
Write-Host "Current variable value: $studyVariable."
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 15
Ticks             : 155838
TotalDays         : 1.80368055555556E-07
TotalHours        : 4.32883333333333E-06
TotalMinutes      : 0.00025973
TotalSeconds      : 0.0155838
TotalMilliseconds : 15.5838

Current variable value: 10.

To overcome this, you can use the invocation operator & and enclose the script block in {}, to execute it in a separate context.

$studyVariable = 0
Measure-Command { & { 1..10 | % { $studyVariable += 1 } } }
Write-Host "Current variable value: $studyVariable."
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 8
Ticks             : 86542
TotalDays         : 1.00164351851852E-07
TotalHours        : 2.40394444444444E-06
TotalMinutes      : 0.000144236666666667
TotalSeconds      : 0.0086542
TotalMilliseconds : 8.6542

Current variable value: 0.

It’s also worth remembering that if your script block modifies system resources, files, databases, or any other static data, those objects get modified.

$scriptBlock = {
    if (!(Test-Path -Path C:\SuperCoolFolder)) {
        New-Item -Path C:\ -Name SuperCoolFolder -ItemType Directory
    }
}

Measure-Command -Expression { & $scriptBlock }
Get-ChildItem C:\ -Filter SuperCoolFolder | Select-Object FullName
Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 11
Ticks             : 118978
TotalDays         : 1.37706018518519E-07
TotalHours        : 3.30494444444444E-06
TotalMinutes      : 0.000198296666666667
TotalSeconds      : 0.0118978
TotalMilliseconds : 11.8978

FullName : C:\SuperCoolFolder

As a cool exercise, try figuring out why the output from New-Item didn’t show up.

Output and Alternatives

Measure-Command returns a System.TimeSpan object, but not the result from the script. If your study also includes the result, there are two ways you can go about it.

Saving the output in a variable

We know that script blocks executed with Measure-Command run in the current scope. So we could assign the result to a variable, and work with it.

$range = 1..100
$evenCount = 0
$scriptBlock = {
    foreach ($number in $range) {
        if ($number % 2 -eq 0) {
            $evenCount++
        }
    }
}

Measure-Command -InputObject (1..100) -Expression $scriptBlock |
    Format-List TotalMilliseconds
Write-Host "The count of even numbers in 1..100 is $evenCount."
TotalMilliseconds : 1.3838

The count of even numbers in 1..100 is 50.

Custom Function

If you are serious about measuring performance, and want to keep the script block as clean as possible, we can write our own function and shape the output as we want.

The Measure-Command Cmdlet uses an object called System.Diagnostics.Stopwatch. It works like a real stopwatch, and we control it using its methods, like Start(), Stop(), etc. All we need to do is start it before executing our script block, stop it after execution finishes, and collect the result from the Elapsed property.

function Measure-CommandEx {

    [CmdletBinding()]
    param (
        [Parameter(Mandatory, Position = 0)]
        [scriptblock]$Expression,

        [Parameter(ValueFromPipeline)]
        [psobject[]]$InputObject
    )

    Begin {
        $stopWatch = New-Object -TypeName 'System.Diagnostics.Stopwatch'

        <#
            We need to define result as a list because the way objects
            are passed to the pipeline. If you pass a collection of objects,
            the pipeline sends them one by one, and the result
            is always overridden by the last item.
        #>
        [System.Collections.Generic.List[PSObject]]$result = @()
    }

    Process {
        if ($InputObject) {

            # Starting the stopwatch.
            $stopWatch.Start()

            # Creating the '$_' variable.
            $dollarUn = New-Object -TypeName psvariable -ArgumentList @('_', $InputObject)

            <#
                Overload is:
                    InvokeWithContext(
                        Dictionary<string, scriptblock> functionsToDefine,
                        List<psvariable> variablesToDefine,
                        object[] args
                    )
            #>
            $result.AddRange($Expression.InvokeWithContext($null, $dollarUn, $null))

            $stopWatch.Stop()
        }
        else {
            $stopWatch.Start()
            $result.AddRange($Expression.InvokeReturnAsIs())
            $stopWatch.Stop()
        }
    }

    End {
        return [PSCustomObject]@{
            ElapsedTimespan = $stopWatch.Elapsed
            Result = $result
        }
    }
}

Note that there is overhead when using the InputObject parameter, meaning there is a difference in the overall execution time.
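A quick usage sketch for Measure-CommandEx (assuming the function above has been loaded into the session; the sample script block is illustrative):

```powershell
# Assumes the Measure-CommandEx function above is defined in the session.
$measurement = Measure-CommandEx { 1..5 | ForEach-Object { $_ * 2 } }

$measurement.Result           # 2 4 6 8 10
$measurement.ElapsedTimespan  # a System.TimeSpan with the execution time
```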

Conclusion

I hope you, like me, learned something new today, and had fun along the way.

Until a next time, happy scripting!

The post Measuring script execution time appeared first on PowerShell Community.

Porting System.Web.Security.Membership.GeneratePassword() to PowerShell https://devblogs.microsoft.com/powershell-community/porting-system-web-security-membership-generatepassword-to-powershell/ https://devblogs.microsoft.com/powershell-community/porting-system-web-security-membership-generatepassword-to-powershell/#comments Tue, 09 May 2023 15:51:57 +0000 https://devblogs.microsoft.com/powershell-community/?p=994 This post shows how to port a C# method into PowerShell


I’ve been using PowerShell (Core) for a couple of years now, and it became natural to create automations with all the features that are not present in Windows PowerShell. However, there is still one feature I miss in PowerShell, and this feature, as silly as it sounds, is GeneratePassword, from System.Web.Security.Membership.

This happens because the assembly was developed for .NET Framework, and was not brought to .NET (Core). Although there are multiple alternatives that achieve the same result, I thought this was the perfect opportunity to show the power in PowerShell, and port this method from C#.

Method

We are going to get this method’s code by using an IL decompiler. C# is compiled to an Intermediate Language, which allows us to decompile it. The tool I’ll be using is ILSpy, which can be found on the Microsoft Store.

The code for GeneratePassword and the System.Web library was not written by me, and the purpose of decompiling it is purely educational. As harmless as this code is, it comes with no security warranties, nor is it intended for misuse.

Getting the Code

Once installed, open ILSpy, click on File and Open from GAC…. On the search bar, type System.Web, select the assembly, and click Open.

File menu Open from GAC menu

Once loaded, expand the System.Web assembly tree, and the System.Web.Security namespace. Inside System.Web.Security, look for the Membership class, click on it, and the decompiled code should appear on the right pane.

Membership class

Scroll down until you find the GeneratePassword method, and expand it.

GeneratePassword method

Porting to PowerShell

Now the fun begins. Let’s do this using PowerShell tools only, which means we’re not going to copy the Membership class and method. We are going to create a function, and keep the variable names the same, so it’s easier for us to compare.

  • Starting with the method’s signature: public static string GeneratePassword(int length, int numberOfNonAlphanumericCharacters)
    • public means this method can be called from outside the assembly.
    • static means I can call this method without having to instantiate an object of type Membership.
    • string means this method returns a string.
  • Utility methods and properties. GeneratePassword uses methods and properties that are also defined in the System.Web library.
    • Methods
    • System.Web.CrossSiteScriptingValidation.IsDangerousString(string s, out int matchIndex)
    • System.Web.CrossSiteScriptingValidation.IsAtoZ(char c)
    • Properties
    • char[] punctuations, from System.Web.Security.Membership
    • char[] startingChars, from System.Web.CrossSiteScriptingValidation

Now enough C#, let’s get to scripting.

Main function

For this, we are going to use the Advanced Function template, from Visual Studio Code. I’ll name the main function New-StrongPassword, but you can name it as you like, just remember using approved verbs.

This method takes two integer parameters, so let’s create them in the param() block. The first two if statements are checks to ensure both parameters are within an acceptable range. We can accomplish the same with parameter attributes.

function New-StrongPassword {

    [CmdletBinding()]
    param (

        # Number of characters.
        [Parameter(
            Mandatory,
            Position = 0,
            HelpMessage = 'The number of characters the password should have.'
        )]
        [ValidateRange(1, 128)]
        [int] $Length,

        # Number of non alpha-numeric chars.
        [Parameter(
            Mandatory,
            Position = 1,
            HelpMessage = 'The number of non alpha-numeric characters the password should contain.'
        )]
        [ValidateScript({
            if ($PSItem -gt $Length -or $PSItem -lt 0) {
                $newObjectSplat = @{
                    TypeName = 'System.ArgumentException'
                    ArgumentList = 'Membership minimum required non alpha-numeric characters is incorrect'
                }
                throw New-Object @newObjectSplat
            }
            return $true
        })]
        [int] $NumberOfNonAlphaNumericCharacters

    )

    begin {

    }

    process {

    }

    end {

    }
}

Utilities

Now let’s focus on the Begin{} block, and create those utility methods, and properties.

Properties

These are the two properties, in our case variables, that we need to create.

private static char[] startingChars = new char[2] { '<', '&' };
private static char[] punctuations = "!@#$%^&*()_-+=[{]};:>|./?".ToCharArray();

Let’s create them as global variables, to be used across our functions if necessary.

[char[]]$global:punctuations = @('!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '_',
                                 '-', '+', '=', '[', '{', ']', '}', ';', ':', '>', '|',
                                 '.', '/', '?')
[char[]]$global:startingChars = @('<', '&')

Get-IsAtoZ

This is what the method looks like:

private static bool IsAtoZ(char c)
{
    if (c < 'a' || c > 'z')
    {
        if (c >= 'A')
        {
            return c <= 'Z';
        }
        return false;
    }
    return true;
}

Pretty simple method with a single parameter; only the comparison operators need to change. Let's use an inline function:

function Get-IsAToZ([char]$c) {
    if ($c -lt 'a' -or $c -gt 'z') {
        if ($c -ge 'A') {
            return $c -le 'Z'
        }
        return $false
    }
    return $true
}
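With the port in place, here is a quick sanity check (assuming Get-IsAToZ has been defined in the current session, for example by dot-sourcing it):

```powershell
# Because $c is typed [char], the comparisons are by character code,
# not by case-insensitive string comparison.
Get-IsAToZ 'm'   # True  (lowercase letter)
Get-IsAToZ 'Q'   # True  (uppercase letter)
Get-IsAToZ '7'   # False (digit, not a letter)
```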

Get-IsDangerousString

This is what the C# method looks like:

internal static bool IsDangerousString(string s, out int matchIndex)
{
    matchIndex = 0;
    int startIndex = 0;
    while (true)
    {
        int num = s.IndexOfAny(startingChars, startIndex);
        if (num < 0)
        {
            return false;
        }
        if (num == s.Length - 1)
        {
            break;
        }
        matchIndex = num;
        switch (s[num])
        {
        case '<':
            if (IsAtoZ(s[num + 1]) || s[num + 1] == '!' || s[num + 1] == '/' || s[num + 1] == '?')
            {
                return true;
            }
            break;
        case '&':
            if (s[num + 1] == '#')
            {
                return true;
            }
            break;
        }
        startIndex = num + 1;
    }
    return false;
}

This one is a little more extensive, but it's pretty much only string manipulation. The interesting part of this method, though, is the matchIndex parameter. Note the out keyword; it means the parameter is passed by reference. We could skip this parameter altogether, because it is not used in our case, but this is a perfect opportunity to exercise the PSReference type.

function Get-IsDangerousString {

    param([string]$s, [ref]$matchIndex)

    # To access the referenced parameter's value, we use the 'Value' property from PSReference.
    $matchIndex.Value = 0
    $startIndex = 0

    while ($true) {
        $num = $s.IndexOfAny($global:startingChars, $startIndex)
        if ($num -lt 0) {
            return $false
        }
        if ($num -eq $s.Length - 1) {
            break
        }
        $matchIndex.Value = $num

        switch ($s[$num]) {
            '<' {
                if (
                    (Get-IsAToZ($s[$num + 1])) -or
                    ($s[$num + 1] -eq '!')     -or
                    ($s[$num + 1] -eq '/')     -or
                    ($s[$num + 1] -eq '?')
                ) {
                    return $true
                }
            }
            '&' {
                if ($s[$num + 1] -eq '#') {
                    return $true
                }
            }
        }
        $startIndex = $num + 1
    }
    return $false
}
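To see the PSReference plumbing in action, here is a small example, assuming both helper functions and $global:startingChars are already defined in the session:

```powershell
$matchIndex = 0

# '&' is found, but not followed by '#', so the string is considered safe.
Get-IsDangerousString -s 'Safe&Sound' -matchIndex ([ref]$matchIndex)   # False

# '<' followed by a letter looks like markup, so the string is flagged.
Get-IsDangerousString -s 'abc<div>' -matchIndex ([ref]$matchIndex)     # True

# The referenced variable now holds the index of the '<' character.
$matchIndex   # 3
```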

With these, our Begin{} block looks like this:

Begin {
    [char[]]$global:punctuations = @('!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '_',
                                     '-', '+', '=', '[', '{', ']', '}', ';', ':', '>', '|',
                                     '.', '/', '?')
    [char[]]$global:startingChars = @('<', '&')

    function Get-IsAToZ([char]$c) {
        if ($c -lt 'a' -or $c -gt 'z') {
            if ($c -ge 'A') {
                return $c -le 'Z'
            }
            return $false
        }
        return $true
    }

    function Get-IsDangerousString {

        param([string]$s, [ref]$matchIndex)

        $matchIndex.Value = 0
        $startIndex = 0

        while ($true) {
            $num = $s.IndexOfAny($global:startingChars, $startIndex)
            if ($num -lt 0) {
                return $false
            }
            if ($num -eq $s.Length - 1) {
                break
            }
            $matchIndex.Value = $num

            switch ($s[$num]) {
                '<' {
                    if (
                        (Get-IsAToZ($s[$num + 1])) -or
                        ($s[$num + 1] -eq '!')     -or
                        ($s[$num + 1] -eq '/')     -or
                        ($s[$num + 1] -eq '?')
                    ) {
                        return $true
                    }
                }
                '&' {
                    if ($s[$num + 1] -eq '#') {
                        return $true
                    }
                }
            }
            $startIndex = $num + 1
        }
        return $false
    }
}

Main Function Body

In this stage, we build the function body itself. Since we're using attributes to check the parameters, the first two if statements from the C# method are omitted. After that, we have a single do-while loop. In this loop, we are going to use types from System.Security.Cryptography, so let's load the assembly.

Add-Type -AssemblyName System.Security.Cryptography

# If you get 'Assembly cannot be found' errors, load it by partial name instead.
# (Note that 'LoadWithPartialName' is deprecated; in practice, the type used below
# is usually already loaded in both Windows PowerShell and PowerShell 7.)
[void][System.Reflection.Assembly]::LoadWithPartialName('System.Security.Cryptography')

First let’s declare the variables used in the main function body, and inside the main loop. This gives us the opportunity to analyze our choices.

# Explicitly declaring the output 'text' to match the method. We could skip this declaration.
# Same for 'matchIndex'.
$text = [string]::Empty
$matchIndex = 0
do {
    $array = New-Object -TypeName 'System.Byte[]' -ArgumentList $Length
    $array2 = New-Object -TypeName 'System.Char[]' -ArgumentList $Length
    $num = 0

    # This stage could be done in three ways. We could use 'New-Object' and immediately call
    # 'GetBytes' on the result; we could use the class constructor directly and call 'GetBytes'
    # on it: [System.Security.Cryptography.RNGCryptoServiceProvider]::new().GetBytes();
    # or we could instantiate the 'RNGCryptoServiceProvider' object with one of the previous
    # methods, store it in a variable, and call 'GetBytes' on that. Since we're using
    # PowerShell tools as much as we can, and we want to stay true to the method, let's use
    # the first option. [void] is used to suppress output.
    [void](New-Object -TypeName 'System.Security.Cryptography.RNGCryptoServiceProvider').GetBytes($array)

    # Note that when passing a variable by reference to a function parameter, we need to
    # cast it to 'PSReference'. The parentheses are necessary so the parameter receives
    # the reference object instead of a string.
} while ((Get-IsDangerousString -s $text -matchIndex ([ref]$matchIndex)))

Note that, in our pursuit to stay true to the method's layout, we are including extra declarations. Although this could be avoided, in some cases it helps with script readability. Plus, if you have experience with another programming language, it will feel familiar.

Right after that, we have a for loop which chooses each character of our password. It does this with a series of mathematical operations and comparisons.

for ($i = 0; $i -lt $Length; $i++) {
    $num2 = [int]$array[$i] % 87
    if ($num2 -lt 10) {
        $array2[$i] = [char](48 + $num2)
        continue
    }
    if ($num2 -lt 36) {
        $array2[$i] = [char](65 + $num2 - 10)
        continue
    }
    if ($num2 -lt 62) {
        $array2[$i] = [char](97 + $num2 - 36)
        continue
    }
    $array2[$i] = $global:punctuations[$num2 - 62]
    $num++
}
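To see why this arithmetic lands on the right characters, here is a short illustration of the mapping (not part of the function itself). A byte modulo 87 yields a value in 0-86, which is partitioned across the 10 digits, 26 uppercase letters, 26 lowercase letters, and 25 punctuation characters:

```powershell
# 0..9   -> '0'..'9'  (ASCII 48 is '0')
[char](48 + 5)         # '5'

# 10..35 -> 'A'..'Z'  (ASCII 65 is 'A')
[char](65 + 20 - 10)   # 'K'

# 36..61 -> 'a'..'z'  (ASCII 97 is 'a')
[char](97 + 40 - 36)   # 'e'

# 62..86 -> one of the 25 characters in $global:punctuations
```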

The next section manages our number of non-alphanumeric characters. It does that by generating random symbol characters and replacing values in the array we filled in the previous loop.

if ($num -lt $NumberOfNonAlphaNumericCharacters) {
    $random = New-Object -TypeName 'System.Random'

    # Generating only the characters left to complete our parameter specification.
    for ($j = 0; $j -lt $NumberOfNonAlphaNumericCharacters - $num; $j++) {
        $num3 = 0
        do {
            $num3 = $random.Next(0, $Length)
        } while (![char]::IsLetterOrDigit($array2[$num3]))
        $array2[$num3] = $global:punctuations[$random.Next(0, $global:punctuations.Length)]
    }
}

Now all that’s left is to create a string from the character array, and check if it’s safe with Get-IsDangerousString.

$text = [string]::new($array2)

If our text is safe, we return it and the function reaches the end of execution. Our finished function looks like this:

function New-StrongPassword {

    [CmdletBinding()]
    param (

        # Number of characters.
        [Parameter(
            Mandatory,
            Position = 0,
            HelpMessage = 'The number of characters the password should have.'
        )]
        [ValidateRange(1, 128)]
        [int] $Length,

        # Number of non alpha-numeric chars.
        [Parameter(
            Mandatory,
            Position = 1,
            HelpMessage = 'The number of non alpha-numeric characters the password should contain.'
        )]
        [ValidateScript({
            if ($PSItem -gt $Length -or $PSItem -lt 0) {
                $newObjectSplat = @{
                    TypeName = 'System.ArgumentException'
                    ArgumentList = 'Membership minimum required non alpha-numeric characters is incorrect'
                }
                throw New-Object @newObjectSplat
            }
            return $true
        })]
        [int] $NumberOfNonAlphaNumericCharacters

    )

    Begin {
        [char[]]$global:punctuations = @('!', '@', '#', '$', '%', '^', '&', '*', '(', ')', '_',
                                         '-', '+', '=', '[', '{', ']', '}', ';', ':', '>', '|',
                                         '.', '/', '?')
        [char[]]$global:startingChars = @('<', '&')

        function Get-IsAToZ([char]$c) {
            if ($c -lt 'a' -or $c -gt 'z') {
                if ($c -ge 'A') {
                    return $c -le 'Z'
                }
                return $false
            }
            return $true
        }

        function Get-IsDangerousString {

            param([string]$s, [ref]$matchIndex)

            $matchIndex.Value = 0
            $startIndex = 0

            while ($true) {
                $num = $s.IndexOfAny($global:startingChars, $startIndex)
                if ($num -lt 0) {
                    return $false
                }
                if ($num -eq $s.Length - 1) {
                    break
                }
                $matchIndex.Value = $num

                switch ($s[$num]) {
                    '<' {
                        if (
                            (Get-IsAToZ($s[$num + 1])) -or
                            ($s[$num + 1] -eq '!')     -or
                            ($s[$num + 1] -eq '/')     -or
                            ($s[$num + 1] -eq '?')
                        ) {
                            return $true
                        }
                    }
                    '&' {
                        if ($s[$num + 1] -eq '#') {
                            return $true
                        }
                    }
                }
                $startIndex = $num + 1
            }
            return $false
        }
    }

    Process {
        Add-Type -AssemblyName 'System.Security.Cryptography'

        $text = [string]::Empty
        $matchIndex = 0
        do {
            $array = New-Object -TypeName 'System.Byte[]' -ArgumentList $Length
            $array2 = New-Object -TypeName 'System.Char[]' -ArgumentList $Length
            $num = 0
            [void](New-Object -TypeName 'System.Security.Cryptography.RNGCryptoServiceProvider').GetBytes($array)

            for ($i = 0; $i -lt $Length; $i++) {
                $num2 = [int]$array[$i] % 87
                if ($num2 -lt 10) {
                    $array2[$i] = [char](48 + $num2)
                    continue
                }
                if ($num2 -lt 36) {
                    $array2[$i] = [char](65 + $num2 - 10)
                    continue
                }
                if ($num2 -lt 62) {
                    $array2[$i] = [char](97 + $num2 - 36)
                    continue
                }
                $array2[$i] = $global:punctuations[$num2 - 62]
                $num++
            }

            if ($num -lt $NumberOfNonAlphaNumericCharacters) {
                $random = New-Object -TypeName 'System.Random'

                for ($j = 0; $j -lt $NumberOfNonAlphaNumericCharacters - $num; $j++) {
                    $num3 = 0
                    do {
                        $num3 = $random.Next(0, $Length)
                    } while (![char]::IsLetterOrDigit($array2[$num3]))
                    $array2[$num3] = $global:punctuations[$random.Next(0, $global:punctuations.Length)]
                }
            }

            $text = [string]::new($array2)
        } while ((Get-IsDangerousString -s $text -matchIndex ([ref]$matchIndex)))
    }

    End {
        return $text
    }
}

Result

Now all that’s left is to call our function:

New-StrongPassword

Conclusion

I hope you had as much fun as I had building this function. With this new skill, you can improve your scripts' complexity and reliability. It also makes you more comfortable writing your own modules, binary or not.

Thank you for coming along.

Happy scripting!

The post Porting System.Web.Security.Membership.GeneratePassword() to PowerShell appeared first on PowerShell Community.

Designing PowerShell For End Users

Originally published Tue, 02 May 2023. This post explains taking user experience into account when designing PowerShell tools.

PowerShell, being built on .NET and object-oriented in nature, is a fantastic language for developing tooling that you can deliver to your end users. These may be fellow technologists, or non-technical users within your organization. It could also be a tool you wish to share with the community, either via your own GitHub repository or by publishing to the PowerShell Gallery.

What Are You Doing?

When setting out on the task of developing a tool you should, as a first step, stop and think. Think about what problem your tool is trying to solve. This could be a number of things:

  • Creating data
  • Collating data
  • Interacting with a system or systems

The sky is the limit here, but your first task is to determine what it is that you are trying to accomplish.

What Should You Call It?

Your second step should be to consider your tool’s name. Whether this is a single function, or a series of functions that form a new module, you should consider the following:

  • Use approved verbs for Functions. You can run Get-Verb in your console to quickly get a list! Tip: Use Get-Verb | Sort-Object to make this easier to parse!
  • Use a coherent noun. Be as specific as possible. Using a great combination of verb/noun syntax provides clarity to what your tool does.

Designing Parameters

This step could take some time, and a little trial and error. You want your tool to be flexible, but you don't want parameter names so obscure that they are hard to use or remember. Succinct is better here. If you need to add some flexibility to your tool, consider using ParameterSets. These give your end users a few different ways to use your tool, if that is, or becomes, necessary in the future.
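As a sketch of what ParameterSets can look like in practice (the function and parameter names here are invented for illustration):

```powershell
function Get-Report {
    [CmdletBinding(DefaultParameterSetName = 'ByName')]
    param(
        # Either look the report up by name...
        [Parameter(Mandatory, ParameterSetName = 'ByName')]
        [String]$Name,

        # ...or by its numeric identifier, but never both at once.
        [Parameter(Mandatory, ParameterSetName = 'ById')]
        [Int]$Id
    )
    "Resolved via parameter set: $($PSCmdlet.ParameterSetName)"
}

Get-Report -Name 'Quarterly'   # Resolved via parameter set: ByName
Get-Report -Id 42              # Resolved via parameter set: ById
```

PowerShell works out which set is in play from the parameters supplied, and $PSCmdlet.ParameterSetName tells your code which path to take.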

Applying Guardrails

Guardrails, in this context, refers to the application of restrictions upon your parameters. These prevent your end users from passing incorrect input to the tool you’ve provided them. Given that PowerShell is built on .NET, there is a ton of flexibility and strength in the guardrails you can employ.

I’ll touch on just a few of my favorites, but this is by far not an exhaustive list.

1. ValidateSet

Let’s look at an example first:

[CmdletBinding()]
Param(
    [Parameter()]
    [ValidateSet('Cat','Dog','Fish','Bird')]
    [String]
    $Animal
)

Notice above that we've defined a non-mandatory parameter of type [String]. The type constraint itself is a guardrail: input that cannot be converted to a string causes an error. We have added further restrictions (guardrails) on this parameter by employing a [ValidateSet()] attribute, which limits valid input to the members of the set. Provide Horse to the Animal parameter and, even though it is a string, it produces an error because it's not a member of the approved set of inputs.
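To try this out, the parameter block can be wrapped in a throwaway function (Get-Pet is an invented name for illustration):

```powershell
function Get-Pet {
    [CmdletBinding()]
    param(
        [Parameter()]
        [ValidateSet('Cat','Dog','Fish','Bird')]
        [String]$Animal
    )
    "You chose: $Animal"
}

Get-Pet -Animal Dog     # You chose: Dog
Get-Pet -Animal Horse   # throws a parameter validation error
```

As a bonus, [ValidateSet()] also gives your users tab completion for the allowed values.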

2. ValidateRange

We’ll start with another example:

[CmdletBinding()]
Param(
    [Parameter()]
    [ValidateRange(2005,2023)]
    [Int]
    $Year
)

In this example we have defined a Year parameter that is an [Int], meaning only numbers are valid input. We’ve applied guardrails via [ValidateRange()], which limits the input to between 2005 and 2023. Any number outside of that range produces an error.
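Wrapped in a hypothetical function, the behavior looks like this:

```powershell
function Set-ModelYear {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateRange(2005, 2023)]
        [Int]$Year
    )
    "Year set to $Year"
}

Set-ModelYear -Year 2010   # Year set to 2010
Set-ModelYear -Year 1999   # fails validation before the function body runs
```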

3. ValidateScript

The [ValidateScript()] attribute is extremely powerful. It allows you to run arbitrary PowerShell code in a script block to check the input of a given parameter. Let’s check out a very simple example:

[CmdletBinding()] 
Param( 
    [Parameter()]
    [ValidateScript({ Test-Path $_ })]
    [String]
    $InputFile
)

By using Test-Path $_ in the script block of our [ValidateScript()] attribute (notice the { } here), we instruct PowerShell to confirm that the path provided to the parameter actually exists. This puts guardrails around human error in the form of typos.
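Here is a sketch of that validation in use; Import-DataFile is an invented name for illustration:

```powershell
function Import-DataFile {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [ValidateScript({ Test-Path $_ })]
        [String]$InputFile
    )
    "Processing $InputFile"
}

# $PSHOME (the PowerShell install directory) always exists, so this binds successfully.
Import-DataFile -InputFile $PSHOME

# A mistyped path fails during parameter binding, before the body ever runs.
Import-DataFile -InputFile 'C:\Tmep\data.csv'
```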

Wrapping It Up

As previously stated, adding guardrails to your tools using these methods (and countless others not mentioned) demonstrably increases the usability and adoption of your tools.

So take a step back, think about your tool’s design first, and then start writing the code. I think you’ll find that it is a much more enjoyable experience, from creation to adoption.

The post Designing PowerShell For End Users appeared first on PowerShell Community.
