Channel: a sysadmin'z hard dayz

How to flush / empty / purge / resize your System Volume Information folder

vssadmin list shadowstorage
[....]
Shadow Copy Storage association                                                         
For volume: (F:)\\?\Volume{7df38471-635b-11e4-9415-000c29dbe934}\                    
Shadow Copy Storage volume: (F:)\\?\Volume{7df38471-635b-11e4-9415-000c29dbe934}\    
Used Shadow Copy Storage space: 249 GB (41%)                                         
Allocated Shadow Copy Storage space: 251 GB (41%)                                    
Maximum Shadow Copy Storage space: UNBOUNDED (2863325530%)
[...]
                        
vssadmin resize shadowstorage /on=F: /For=F: /Maxsize=40GB 
Successfully resized the shadow copy storage association  
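If you want to keep an eye on the usage from a script, the percentage can be scraped from the vssadmin output with standard text tools. A minimal sketch, using a sample line copied from the output above (on a real box you would capture the live output with `usage=$(vssadmin list shadowstorage)` instead):

```shell
# Sample line as it appears in `vssadmin list shadowstorage` output;
# on a real machine: usage=$(vssadmin list shadowstorage)
usage='Used Shadow Copy Storage space: 249 GB (41%)'

# Keep only the "Used" line, then pull out the percentage between parentheses
pct=$(printf '%s\n' "$usage" | grep 'Used Shadow Copy Storage' | grep -o '([0-9]*%)' | tr -d '(%)')
echo "$pct"    # prints: 41
```

From there it is trivial to alert (or resize) when the number crosses a threshold.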
  

Incrementally back up the vhdx files of Hyper-V Virtual Machines hosted on Cluster Shared Volumes to a network share

It's 2015, so why would anyone still use Windows Server 2008 R2? Windows Server Backup in Windows Server 2012 includes great (but limited, see below) support for CSV backup. Some notes and warnings:
  •     Virtual machines hosted on CSVs cannot be added as part of a normal system backup configuration
  •     Windows Server Backup has to be configured on all nodes to ensure that backup and recovery will be available in the event of a failure on one of the nodes in the cluster.
  •     Volume recovery is not supported - though it can be worked around
  •     Security access control lists are not applicable on the CSV file service root, so file recovery to the root of a CSV volume is not supported.
Say you have two Hyper-V hosts and one SAN hosting your VMs' files via Cluster Shared Volumes, and four VMs across the hosts - two on each. You have already set up your OS-level backup jobs (e.g. Windows Backup) inside the host OSs and the VM OSs, and they have been backing up your data to the SAN on dedicated backup LUNs lying on physically separated, fault-tolerant arrays. Looks sufficient, doesn't it?
Actually, it isn't that good. What if your SAN blows up? You lose all your VMs and all your VMs' backups at the same time. You also need a fool-proof off-site backup, and it must be easy to handle. Luckily, there is a simple solution without the need for third-party tools like HVbackup (which is, by the way, a good one).
Let's say your first Hyper-V host server is called HOST1 and the VMs running on it are named VM1 and VM2.
So you have a file system on HOST1 like this:

as C:\ClusterStorage\Volume4\VM1\ ....

To back up your first virtual guest (with its entire CSV, to be on the safe side) to your external backup server share, just execute:

wbadmin start backup -include:C:\ClusterStorage\volume4\ -backuptarget:\\backupserver\vmbackup\vm1

It takes some time:

You can easily restore files from your VM's virtual disk if you find it in the backup:
Then just mount it in Disk Management (Attach VHD) .....
and then assign a drive letter to it, open the new disk with a file explorer and find the real virtual disk :) inside it. Repeat the above process by attaching this inner virtual disk with the Disk Management console as well.

In case you need to restore your whole VM (whole meaning: disaster recovery, including all its Hyper-V settings):
Find your backup versions (if you are lucky enough to have more than a single one):

wbadmin get versions -backuptarget:\\backupserver\vmbackup\vm1

Restoring: (be careful)

wbadmin start recovery -version:02/11/2015-08:25 -backuptarget:\\backupserver\vmbackup\vm1 -itemtype:file -items:C:\ClusterStorage\Volume4\  -recursive -recoverytarget:Z:\recover -machine:HOST1

What did I mean by backup versions? Have you ever been frustrated that Windows Backup can't maintain multiple versions on a network share? So have I. I tried to cheat WSB by using a local directory symbolic link pointing to the network share:
mklink /D M:\MyNetwork \\mybackupserver\vmbackup
and
wbadmin start backup -include:C:\ClusterStorage\volume4\ -backuptarget:\\localhost\d$\MyNetwork\vm1 -quiet
Tadaamm! So far so good.

Unfortunately,
wbadmin get versions -backupTarget:M:\MyNetwork                                               
matter-of-factly answers that it can't be fooled in such a stupid way.

wbadmin 1.0 - Backup command-line tool
(C) Copyright 2013 Microsoft Corporation. All rights reserved.
The backup cannot be completed because the backup storage destination is a shared folder mapped to a drive letter. Use the Universal Naming Convention (UNC) path (\\servername\sharename\) of the backup storage destination instead. 
In short, it sadly won't do versioning; it just keeps one full version as usual. Bad luck. Folks say I should use iSCSI-based network drives because that's the only way to get WSB versioning. I don't want to bother with that because I already have lots of iSCSI drives from the SAN and I would be a bit afraid of messing up those drives from different sources.

Meanwhile, here are some useful facts from TechNet topics to consider about WSB:
You can also set -vssFULL  parameter in backup jobs but there's not much use in doing so. According to the manual: "If specified, performs a full backup using the Volume Shadow Copy Service (VSS). Each file's history is updated to reflect that it was backed up. If this parameter is not used, wbadmin start backup makes a copy backup, but the history of files being backed up is not updated." In short: "vssfull is only meaningful if there is another 3rd party backup application is being simultaneously used on the same machine along with server backup application and you have application like exchange running on the machine who have vss writers. if that is not the case - it can be ignored and defaults will work fine."
And "All backups after first backup automatically takes incremental storage space on the backup location since changes are tracked using volume shadow copy on the backup location. This incremental storage space is proportional to the changes from the last backup."
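To cover several CSVs from a scheduled task, the single wbadmin invocation shown earlier can be wrapped in a small batch file. A minimal sketch - the volume list and share names here are assumptions, adjust them to your own layout:

```bat
@echo off
rem Back up each CSV to its own subfolder on the backup share
rem (volume and share names are examples, not from a real setup)
for %%V in (Volume3 Volume4) do (
  wbadmin start backup -include:C:\ClusterStorage\%%V\ -backuptarget:\\backupserver\vmbackup\%%V -quiet
)
```

The -quiet switch suppresses the interactive confirmation, which is what you want when running from Task Scheduler.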

How to install smokeping - the only way it works :)

What is smokeping? It's a powerful network monitoring tool that works mainly with tricky ICMP pings and also has built-in support for special TCP and UDP port connection tests. You can check your statistics on web-based graphs.

How to install it? There are blogs that discuss the process, but I strongly recommend not following them word for word because they suffer from serious errors that will keep you from succeeding. Happily, you are here, at the perfect place for the perfect tutorial!

What is a master and slave configuration?
The master is actually your central smokeping server. It periodically checks the hosts you configured it to monitor. Nothing surprising, ehm? Let's look at the slave(s) then. They check BACK to the master (or any other configured host) and send their results BACK to the master, which processes their data and displays the results together with its normal monitoring data.

Okay, let's install my master Debian/Ubuntu node first. I'm going to create two logical units inside my monitoring tree. I'll call the first "External hosts" and the second (guess what) "Internal hosts".

MASTER node
------
apt-get update
apt-get install smokeping
Check if /etc/default/smokeping has only ONE active line: "MODE=master"
cd /etc/smokeping/
touch slave-secrets
Here you define your SLAVE servers' individual passwords. In my case I will have two slave (also actively checking) servers in my "External hosts" group, see later.
cat /etc/smokeping/slave-secrets
mywebserver:topsecr3t
myftpserver:topsecr3t
echo "topsecr3t"> slavesecrets.conf
chmod 660 slave-secrets
chmod 600 slavesecrets.conf
chown smokeping:www-data slave-secrets slavesecrets.conf
cd config.d/
cat Alerts
*** Alerts ***
to = me.admin@mydomain.com
from = smokeping@mydomain.com

[...the others remain at their defaults...]
cat Database
*** Database ***

step     = 200
pings    = 100

[...the others remain at their defaults...]
These two variables are changed because I want to check my hosts every 200 seconds with 100 ping packets.
cat General
*** General ***

owner    = Me.Da.Admin
contact  = me.admin@mydomain.com
mailhost = localhost
# NOTE: do not put the Image Cache below cgi-bin
# since all files under cgi-bin will be executed ... this is not
# good for images.
cgiurl   = http://localhost/cgi-bin/smokeping.cgi
[...the others remain at their defaults...]
 cat Probes
*** Probes ***

+ FPing

binary = /usr/bin/fping
packetsize = 500
pings = 100
step = 200
timeout = 1.5
[...the others remain at their defaults...]
Several other parameters can be used, see later.
 cat Slaves
*** Slaves ***
secrets=/etc/smokeping/slave-secrets

+mywebserver
display_name=My Great webserver
color=ff0000

+myftpserver
display_name=My Super FTP server
color=00b7e2

I've defined my slave servers here - NOT the hosts I want to check. Don't be confused: these two categories are totally different!
cat Targets
 *** Targets ***
probe = FPing

menu = Top
title = Network Latency Grapher
remark = Welcome to my little SmokePing website.

+ External
menu = External hosts
title = Ext

++ mywebserver
menu = My Superb Webserver
host = 10.243.43.6

++ myftpserver
menu = My gorgeous ftpserver
host = 172.16.29.253

++ mysmokeping
menu = this.server
host = 195.95.95.95
slaves = mywebserver myftpserver

+Internal
menu = Internal hosts
title = Gateways

++ MyGateway
menu = My Little Cisco Switch
host = 172.16.21.254

I've set the most important things here: my monitored hosts. The probe type is simple fping. There are two units here: External and Internal. Their friendly names will be shown in the web menu as "External hosts" and "Internal hosts". External has 3 hosts inside it: two external servers and the monitoring server itself. mywebserver and myftpserver HAVE to be the same strings the servers identify themselves by (i.e. what they answer to the "hostname" shell command)! The ++mysmokeping section MUST HAVE the "slaves = mywebserver myftpserver" line. If you don't have it, the slaves are going to reply with the unpleasant message
"ERROR: we did not get config from the master. Maybe we are not configured as a slave for any of the targets on the master ?"
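The hostname requirement above is easy to verify with a quick grep of the master's Slaves file. A sketch, illustrated with a temporary copy of the file (on the master the real file lives at /etc/smokeping/config.d/Slaves):

```shell
# Build a sample Slaves file for illustration only; on the master
# the real file is /etc/smokeping/config.d/Slaves
cat > /tmp/Slaves <<'EOF'
*** Slaves ***
secrets=/etc/smokeping/slave-secrets

+mywebserver
display_name=My Great webserver
EOF

slave=mywebserver                 # on a real slave: slave=$(hostname)
if grep -q "^+${slave}\$" /tmp/Slaves; then
  echo "match"                    # the master knows this slave
else
  echo "MISMATCH - fix the Slaves section or the slave's hostname"
fi
```

Run the hostname side of the check on the slave itself, since that is the name it reports to the master.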
/etc/init.d/smokeping restart
If this doesn't give you any useful answer :) you may find this command profitable: journalctl -xn

Wait some minutes and point your browser to http://195.95.95.95/smokeping/smokeping.cgi

And here comes the important part for slaves: set your file rights according to the following:
/var/lib/smokeping# ls -sal
[...]
4 drwxrwx---  2 smokeping www-data  4096 Mar  6 13:05 External
cd smokeping/
chown smokeping:www-data *
chmod 755 *
This is a MUST so that the slaves are able to POST their data to the Apache instance running on your smokeping master.

SLAVE nodes
-----
apt-get install smokeping
cat /etc/default/smokeping
MODE=slave
MASTER_URL=http://195.95.95.95/cgi-bin/smokeping.cgi
SHARED_SECRET=/etc/smokeping/slavesecrets.conf

Note that this is a considerably insecure configuration. Use VPN connections, firewalls or some other type of HTTP authentication (in .htaccess, for example). You have to have exactly the above three lines, no more, no less.
echo "topsecr3t"> /etc/smokeping/slavesecrets.conf
ls -sal /etc/smokeping/slavesecrets.conf
Set file rights as:
-r--r-----  1 smokeping root   13 Mar  6 07:41 slavesecrets.conf

None of the other files are needed here. You can safely delete the whole config.d/ directory, for example. Nice, huh?
/etc/init.d/smokeping restart
Wait some minutes and watch your slave-driven data flow in under the "External" section on your master's webpage.
If anything goes wrong - or nothing happens at all - check your Apache error log:
cat /var/log/apache2/error.log

Check the online manual for further reference.

powershell - check if exists

$
0
0
Taken from various forums... I can't remember where from.

do {
    $testpath = Test-Path -Path \\dns2\d$\test
    Start-Sleep -Seconds 10
} until ($testpath -eq $true)

do {
    Start-Sleep -Seconds 1
    # Select-Object returns $null until the mailbox permission exists
    $mailboxExists = Get-MailboxPermission -Identity "CN=$displayName,$DN" -User "NT AUTHORITY\SELF" -ErrorAction SilentlyContinue |
        Select-Object -ExpandProperty IsValid
    Write-Host "." -NoNewline
} while (!$mailboxExists)
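The same wait-until-it-exists pattern translates to plain shell for the Linux boxes, too. A minimal sketch with a timeout added (the function name and demo paths are my own):

```shell
# Poll until a path exists, with a timeout in seconds; returns non-zero on timeout.
wait_for_path() {
  wfp_path=$1
  wfp_timeout=${2:-30}
  wfp_waited=0
  until [ -e "$wfp_path" ]; do
    if [ "$wfp_waited" -ge "$wfp_timeout" ]; then
      return 1                    # gave up waiting
    fi
    sleep 1
    wfp_waited=$((wfp_waited + 1))
  done
  return 0
}

# Demo: create the file after 2 seconds in the background, then wait for it.
tmpfile=/tmp/wait_demo.$$
( sleep 2; : > "$tmpfile" ) &
wait_for_path "$tmpfile" 10 && echo "found"   # prints: found
rm -f "$tmpfile"
```

Unlike the PowerShell loops above, the timeout means a missing path can't hang the script forever.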


Import-Module ActiveDirectory
#Import CSV
$csv = @()
$csv = Import-Csv -Path "C:\Temp\bulk_input.csv"

#Get Domain Base
$searchbase = Get-ADDomain | ForEach {  $_.DistinguishedName }

#Loop through all items in the CSV
ForEach ($item In $csv)
{
  #Check if the OU exists
  $check = [ADSI]::Exists("LDAP://$($item.GroupLocation),$($searchbase)")
  
  If ($check -eq $True)
  {
    Try
    {
      #Check if the Group already exists
      $exists = Get-ADGroup $item.GroupName
      Write-Host "Group $($item.GroupName) already exists! Group creation skipped!"
    }
    Catch
    {
      #Create the group if it doesn't exist
      $create = New-ADGroup -Name $item.GroupName -GroupScope $item.GroupType -Path ($($item.GroupLocation)+","+$($searchbase))
      Write-Host "Group $($item.GroupName) created!"
    }
  }
  Else
  {
    Write-Host "Target OU can't be found! Group creation skipped!"
  }
}

Dell OMSA services installation trouble on Debian 7 (wheezy) - how to

root@omsa:~# echo 'deb http://linux.dell.com/repo/community/deb/latest /'> /etc/apt/sources.list.d/linux.dell.com.sources.list
root@omsa:~# apt-get update
Hit http://ftp.hu.debian.org wheezy Release.gpg
Hit http://ftp.hu.debian.org wheezy-updates Release.gpg
Hit http://ftp.hu.debian.org wheezy Release
Hit http://ftp.hu.debian.org wheezy-updates Release
Hit http://ftp.hu.debian.org wheezy/main Sources
Hit http://ftp.hu.debian.org wheezy/main i386 Packages
Hit http://ftp.hu.debian.org wheezy/main Translation-en
Hit http://ftp.hu.debian.org wheezy-updates/main Sources
Hit http://ftp.hu.debian.org wheezy-updates/main i386 Packages/DiffIndex
Hit http://ftp.hu.debian.org wheezy-updates/main Translation-en/DiffIndex
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://security.debian.org wheezy/updates Release
Hit http://security.debian.org wheezy/updates/main Sources
Hit http://security.debian.org wheezy/updates/main i386 Packages
Hit http://security.debian.org wheezy/updates/main Translation-en
Get:1 http://linux.dell.com  Release.gpg [827 B]
Get:2 http://linux.dell.com  Release [1,392 B]
Ign http://linux.dell.com  Release
Get:3 http://linux.dell.com  Packages [12.2 kB]
Ign http://linux.dell.com  Translation-en_US
Ign http://linux.dell.com  Translation-en
Fetched 14.4 kB in 4s (3,136 B/s)
Reading package lists... Done
W: GPG error: http://linux.dell.com  Release: The following signatures couldn't                                                                                                                    be verified because the public key is not available: NO_PUBKEY 1285491434D8786F
root@omsa:~# gpg --keyserver pool.sks-keyservers.net --recv-key 1285491434D8786F                                                                                                                
gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
gpg: requesting key 34D8786F from hkp server pool.sks-keyservers.net
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 34D8786F: public key "Dell Inc., PGRE 2012 (PG Release Engineering Build Group 2012) <PG_Release_Engineering@Dell.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
root@omsa:~#
root@omsa:~# gpg -a --export 1285491434D8786F | apt-key add -
OK
root@omsa:~# apt-get update
Hit http://ftp.hu.debian.org wheezy Release.gpg
Hit http://ftp.hu.debian.org wheezy-updates Release.gpg
Hit http://ftp.hu.debian.org wheezy Release
Hit http://ftp.hu.debian.org wheezy-updates Release
Hit http://ftp.hu.debian.org wheezy/main Sources
Hit http://security.debian.org wheezy/updates Release.gpg
Hit http://ftp.hu.debian.org wheezy/main i386 Packages
Hit http://ftp.hu.debian.org wheezy/main Translation-en
Hit http://ftp.hu.debian.org wheezy-updates/main Sources
Hit http://ftp.hu.debian.org wheezy-updates/main i386 Packages/DiffIndex
Hit http://security.debian.org wheezy/updates Release
Hit http://ftp.hu.debian.org wheezy-updates/main Translation-en/DiffIndex
Hit http://security.debian.org wheezy/updates/main Sources
Hit http://security.debian.org wheezy/updates/main i386 Packages
Hit http://security.debian.org wheezy/updates/main Translation-en
Get:1 http://linux.dell.com  Release.gpg [827 B]
Hit http://linux.dell.com  Release
Hit http://linux.dell.com  Packages
Ign http://linux.dell.com  Translation-en_US
Ign http://linux.dell.com  Translation-en
Fetched 827 B in 4s (187 B/s)
Reading package lists... Done
root@omsa:~#
root@omsa:~# apt-get install srvadmin-omcommon
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation: The following packages have unmet dependencies:
 srvadmin-omcommon : Depends: libssl0.9.8 (>= 0.9.8k-1) but it is not installable
E: Unable to correct problems, you have held broken packages.
root@omsa:~# dpkg -l|grep libssl
ii  libssl1.0.0:i386                     1.0.1e-2+deb7u16              i386         SSL shared libraries
What's happening here? The srvadmin packages depend on the ancient libssl0.9.8, which is no longer available in wheezy, so let's fetch it from snapshot.debian.org:
root@omsa:~# wget http://snapshot.debian.org/archive/debian/20110406T213352Z/pool/main/o/openssl098/libssl0.9.8_0.9.8o-7_i386.deb

--2015-04-03 11:55:41--  http://snapshot.debian.org/archive/debian/20110406T213352Z/pool/main/o/openssl098/libssl0.9.8_0.9.8o-7_i386.deb
Resolving snapshot.debian.org (snapshot.debian.org)... 185.17.185.187, 193.62.202.30, 2001:630:206:4000:1a1a:0:c13e:ca1e, ...
Connecting to snapshot.debian.org (snapshot.debian.org)|185.17.185.187|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3090794 (2.9M) [application/x-debian-package]
Saving to: `libssl0.9.8_0.9.8o-7_i386.deb'
100%[=========================================================================================================================================================>] 3,090,794   1.11M/s   in 2.6s

2015-04-03 11:55:44 (1.11 MB/s) - `libssl0.9.8_0.9.8o-7_i386.deb' saved [3090794/3090794]

root@omsa:~# dpkg -i libssl0.9.8_0.9.8o-7_i386.deb
Selecting previously unselected package libssl0.9.8.
(Reading database ... 24865 files and directories currently installed.)
Unpacking libssl0.9.8 (from libssl0.9.8_0.9.8o-7_i386.deb) ...
Setting up libssl0.9.8 (0.9.8o-7) ...

root@omsa:~# apt-get install srvadmin-omcommon
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libsmbios2 python-libsmbios smbios-utils srvadmin-omilcore
Suggested packages:
  libsmbios-doc
The following NEW packages will be installed:
  libsmbios2 python-libsmbios smbios-utils srvadmin-omcommon srvadmin-omilcore
0 upgraded, 5 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,685 kB of archives.
After this operation, 24.5 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
[.....]
Setting up python-libsmbios (2.2.13-0ubuntu4) ...
Setting up smbios-utils (2.2.13-0ubuntu4) ...
Setting up srvadmin-omilcore (7.1.0-3) ...
     **********************************************************
     After the install process completes, you may need
     to log out and then log in again to reset the PATH
     variable to access the Dell OpenManage CLI utilities
     **********************************************************
Setting up srvadmin-omcommon (7.1.0-2) ...
root@omsa:~#

root@omsa:~# apt-get install srvadmin-all
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libargtable2-0 libasound2 libopenipmi0 libperl5.14 libsensors4 libsnmp-base libsnmp15 libxi6 libxslt1.1 libxtst6 openipmi setserial snmpd srvadmin-base srvadmin-deng srvadmin-hapi
  srvadmin-idrac srvadmin-idrac-ivmcli srvadmin-idrac-vmcli srvadmin-idrac7 srvadmin-idracadm srvadmin-idracadm7 srvadmin-isvc srvadmin-jre srvadmin-megalib srvadmin-omacore
  srvadmin-rac-components srvadmin-rac4 srvadmin-rac4-populator srvadmin-rac5 srvadmin-racadm4 srvadmin-racadm5 srvadmin-racdrsc srvadmin-racsvc srvadmin-smcommon srvadmin-smweb
  srvadmin-storage srvadmin-storageservices srvadmin-storelib srvadmin-storelib-sysfs srvadmin-sysfsutils srvadmin-tomcat srvadmin-webserver srvadmin-xmlsup x11-common
Suggested packages:
  libasound2-plugins lm-sensors snmp-mibs-downloader
The following NEW packages will be installed:
  libargtable2-0 libasound2 libopenipmi0 libperl5.14 libsensors4 libsnmp-base libsnmp15 libxi6 libxslt1.1 libxtst6 openipmi setserial snmpd srvadmin-all srvadmin-base srvadmin-deng
  srvadmin-hapi srvadmin-idrac srvadmin-idrac-ivmcli srvadmin-idrac-vmcli srvadmin-idrac7 srvadmin-idracadm srvadmin-idracadm7 srvadmin-isvc srvadmin-jre srvadmin-megalib srvadmin-omacore
  srvadmin-rac-components srvadmin-rac4 srvadmin-rac4-populator srvadmin-rac5 srvadmin-racadm4 srvadmin-racadm5 srvadmin-racdrsc srvadmin-racsvc srvadmin-smcommon srvadmin-smweb
  srvadmin-storage srvadmin-storageservices srvadmin-storelib srvadmin-storelib-sysfs srvadmin-sysfsutils srvadmin-tomcat srvadmin-webserver srvadmin-xmlsup x11-common
0 upgraded, 46 newly installed, 0 to remove and 0 not upgraded.
Need to get 115 MB of archives.
After this operation, 319 MB of additional disk space will be used.
Do you want to continue [Y/n]?
Get:1 http://ftp.hu.debian.org/debian/ wheezy/main libasound2 i386 1.0.25-4 [463 kB]
Get:2 http://linux.dell.com/repo/community/deb/latest/  srvadmin-deng 7.1.0-3 [1,230 kB]
Get:3 http://ftp.hu.debian.org/debian/ wheezy/main libsensors4 i386 1:3.3.2-2+deb7u1 [53.9 kB]
Get:4 http://ftp.hu.debian.org/debian/ wheezy/main libxi6 i386 2:1.6.1-1+deb7u1 [76.6 kB]
[.......]
Setting up srvadmin-racsvc (7.1.0-3) ...
Setting up srvadmin-rac4-populator (7.1.0-3) ...
Setting up srvadmin-rac-components (7.1.0-2) ...
Setting up srvadmin-racdrsc (7.1.0-2) ...
Setting up srvadmin-rac4 (7.1.0-3) ...
Setting up srvadmin-racadm5 (7.1.0-2) ...
update-alternatives: using /opt/dell/srvadmin/sbin/racadm-wrapper-rac5 to provide /opt/dell/srvadmin/sbin/racadm (racadm) in auto mode
Setting up srvadmin-rac5 (7.1.0-2) ...
Setting up srvadmin-idracadm (7.1.0-2) ...
Setting up srvadmin-idrac-vmcli (7.1.0-2) ...
Setting up srvadmin-idrac-ivmcli (7.1.0-2) ...
Setting up srvadmin-idrac (7.1.0-2) ...
Setting up srvadmin-idracadm7 (7.1.0-2) ...
Setting up srvadmin-idrac7 (7.1.0-2) ...
Setting up srvadmin-all (7.1.0-3) ...
root@omsa:~#

Further reading: http://linux.dell.com/repo/community/deb/latest/
http://serverfault.com/questions/425322/how-to-set-up-dell-omsa-tools-on-debian-6-squeeze-pe2950

Windows 2008 Server R2 hangs on "Preparing to configure Windows. Do not turn off your computer"

Last night I started installing updates on a Windows 2008 R2 box, but when I had finished my dinner and returned to my computer I got pissed off seeing the server stalled at
and I could not RDP into the OS. I spent about half an hour waiting for something to happen, then did a short Google search. It turned out that by connecting to this OS via the services.msc console of another server in the same domain network, I could see that the Windows Modules Installer service was stuck in the Stopping state.
Using the built-in command-line utilities:
taskkill /S hostname /IM trustedinstaller.exe
or
sc \\computername queryex TrustedInstaller
taskkill /s computername /pid <PID from the output above> /f
...I could have terminated that task. According to some internet reports, if I had done so, my server would have restarted without a hitch. Luckily enough, after entertaining me for more than an hour (meanwhile actually doing nothing), this damn server finally restarted by itself! After the reboot it installed the updates and was ready in less than 5 minutes.
So, in case you are here because you are in the same situation as I was, you'd better have some coffee and wait patiently instead of interfering roughly.

Querying Exchange quota limits in Powershell

Get-mailbox | where {$_.UseDatabaseQuotaDefaults -ne $true} 

Get-MailboxDatabase | Get-MailboxStatistics | Where-Object {$_.StorageLimitStatus -match 'BelowLimit|IssueWarning|ProhibitSend|MailboxDisabled'} 

Get-Mailbox | Get-MailboxStatistics | Sort-Object TotalItemSize -Descending | ft DisplayName,TotalItemSize

Get-PublicFolder -Recurse -ResultSize Unlimited  | Get-PublicFolderStatistics -Server exchange | Select FolderPath, ItemCount, TotalAssociatedItemSize, TotalDeletedItemSize, TotalItemSize | fl

"Verbose" event logging in Windows

It always bothered me that my Windows 2008 Network Policy Server (aka RADIUS server) did not log the successfully authorized usernames. Fortunately there is a way to fix that stupid habit.
Open an elevated command prompt and type this to get a list of your event categories and their subcategories:
Auditpol /list /subcategory:* /r  (optional)

Then type (note that category name strings are localized!):
Auditpol /set /subcategory:"Network Policy Server" /success:enable /failure:enable  
and... back up your policy(ies):
Auditpol /backup /file:C:\mypolic.csv  (optional)

Another method to log both events 6273 and 6279 is via a GPO:
Computer Configuration -> Policies -> Windows Settings -> Security Settings -> Advanced Audit Policy Configuration -> Audit Policies -> Logon/Logoff -> Audit Network Policy Server (set both success and failure to enable). Don't forget to gpupdate /force.
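Once auditing is on, you can also pull the most recent NPS events straight from the command line. A sketch using the built-in wevtutil (6272 = access granted, 6273 = access denied; adjust the IDs and count to taste):

```bat
rem Query the 5 newest NPS grant/deny events from the Security log, newest first
wevtutil qe Security /q:"*[System[(EventID=6272 or EventID=6273)]]" /c:5 /rd:true /f:text
```

Handy for a quick check that the audit policy change actually took effect.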

Further reading here.


How to assign a group of users to a group of alerts from a group of servers in Zabbix

Configuration - Host groups - Create host group (basic step)
Create your first custom group with a name. If you already have your hosts, here you can add them immediately.

Configuration - Hosts - Create host (basic steps)
Create your first host (if you don't have any ;)).
Host name: if you are using active checks (when Zabbix agents connect to your server - less common), this string has to be identical to the one set in your zabbix_agentd.conf.
Visible name: can be any name.
Group: add your new host to your custom group.
Agent interfaces: set your host IP address. Don't mess with dns names.
Templates (second tab): Select and add a template to your host, e.g. "Linux servers".


Administration - Users - Create user
Set names, password, etc. Second tab: Media. Add an email address to your user.

 

Third tab: Permissions. This is one of the most annoying things in Zabbix: you can't actually set anything here. But there is a small hint at the bottom, see
 

So, create your second, third etc. user without adding permissions here.

Administration - Users - Create user group 
Group name: set a meaningful name. Add your users to the group. Second tab: Permissions. At long last, you can click Add and link your host group to your user group here.


Configuration - Actions - Create action
An action is the reason why your users will receive emails. No. Wait. The reason is the trigger the action is linked to. No, wait... The main cause is the item that fires the trigger. Ahh, anyway... The default selection is "source: trigger", which is okay. Trust me, you don't want to know what the others are.
Action name: any meaningful name. Don't touch the default subject and message unless you know what to do here. Second tab: Conditions. That's where the fun begins!
In New condition select host group, equals, and your actual group.
New condition again: Trigger severity, equal or greater, Warning. (Modify this according to your needs.)
More filtering (advanced!). For example: Trigger name, not like, [%string matching a trigger that is high-level enough to notify your users, but of whose particular cause you don't want to alert them%]
Quick link to the manual. (See Escalations too - that's neat stuff.)

Third tab: Operations. Add. Operation type: send message. Select your user group. When everything is set, don't forget to click Add at the bottom left of the page. Then Save.
Happy zabbixing!

Pfsense, Transparent Squid and Dansguardian


How do you set up a transparent Squid (here: HTTP only) proxy with advanced-level security filtering for your local network?

What is Pfsense? What is a proxy? If you don't know the answer to these questions this is not for you.

1. Install Pfsense
2. Set up your interfaces, default gateway, DNS resolvers or forwarders, etc.
3. Install Squid3 and Dansguardian. (At the time of this writing, squidGuard is broken in recent pfSense and won't work with Squid3. In the system log we can see lots of:
squid[81808]: Squid Parent: (squid-1) process 45089 exited with status 1
squid[81808]: Squid Parent: (squid-1) process 63729 started
(squid-1): The redirector helpers are crashing too rapidly, need help!
and in cache.log:
Shared object "libldap-2.4.so.2" not found, required by "squidGuard"
Shared object "libldap-2.4.so.2" not found, required by "squidGuard"
Shared object "libldap-2.4.so.2" not found, required by "squidGuard"
kid1| WARNING: redirector #Hlpr0 exited
FATAL: The redirector helpers are crashing too rapidly, need help!
So after some hours of struggling I decided to give up on squidGuard and move on. Dansguardian is a more advanced and complex filter system anyway.)





4. Set up your (transparent) Squid, for example:
5. Set up your Dansguardian


Remember to edit your regexp URL filters because the default ones will surely block some harmless parts of your favourite pages. In the log (did you turn logging on?) search for:
[2.2.2-RELEASE][admin@my.proxy.local]/var/log/dansguardian: grep DENIED access.log

6. You need an additional port forwarding rule to make it work because, as you can see, DG listens only on TCP 8080.
That's all. If you don't have any blocking firewall rules, your advanced (but not-yet-tuned) HTTP proxy system is working now.

Adding CSVs on Windows 2012 R2 Hyper-V Failover Cluster

In the first part of this article I added some physical and virtual disks to my Dell iSCSI storage. Of course, new vdisks do not appear immediately in the Failover Cluster Manager console.
 
So I opened the Disk Management console on my Hyper-V host.
 

As you can see, a new raw disk appears. We should bring it online, initialize it, format it and give the disk a descriptive name.


.. and try to add it again in the FCM console - this time surely with success. But this is only going to be "available storage" - it still needs to be added to the failover role.
 
It's good practice to rename the new disk to ease further identification and error hunting.

Voila, the new clustered virtual disk is ready to host my new VMs' image files - you know, the .vhdxs and so on.
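The same click-path can also be done from PowerShell once the disk is online and formatted. A sketch using the FailoverClusters module (the resource name "Cluster Disk 2" is an assumption; check what your cluster actually assigned):

```powershell
# Claim the new disk for the cluster, then promote it to a CSV.
# Run in an elevated PowerShell on a cluster node; resource name is an example.
Import-Module FailoverClusters
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```

After this the volume shows up under C:\ClusterStorage\ just like the ones added through the GUI.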

Failed Windows Update = Faulty Domain Controller Windows 2012 =Restart loop = Dead Exchange 2013

To be continued



Get-ExchangeServer -Identity <server_name> -Status | FL

Set-ExchangeServer -Identity servername -StaticExcludedDomainControllers: oldservername

How to change the domain controller name that Exchange sees:

  https://technet.microsoft.com/en-us/library/jj592690.aspx

 
nltest /dsgetsite
DSGetSiteName failed: Status = 1919 0x77f ERROR_NO_SITENAME
nltest /dsgetdc:<FQDN of your domain>


In regedit, drill down to the following:
HKLM\System\CurrentControlSet\Services\Netlogon\Parameters
Once you click Parameters, add a string value called "SiteName"
as written here https://messagingschool.wordpress.com/2014/04/18/dsgetsitename-failed-status-1919-0x77f-installing-exchange-2013-sp1/

 
Get-ClientAccessServer | Test-MRSHealth



-StaticExcludedDomainControllers
https://technet.microsoft.com/en-us/library/dd298163%28v=exchg.150%29.aspx

--

import-module addsdeployment
uninstall-ADDSDomainController -ForceRemoval:$true -Force:$true
https://technet.microsoft.com/en-us/library/jj574104.aspx
http://sysadminconcombre.blogspot.hu/2014/03/scenario-my-test-lab-consists-of-3.html
http://chinnychukwudozie.com/2014/01/27/using-ntdsutil-metada-cleanup-to-remove-a-failedoffline-domain-controller-object/

Finally, check if your DC is really gone:
Detailed list:
Get-ADComputer -LDAPFilter "(&(objectCategory=computer)(userAccountControl:1.2.840.113556.1.4.803:=8192))"
another method to the same detailed list:
Get-ADDomainController -Filter * | Select-Object name
or a simple list:
Get-ADGroupMember 'Domain Controllers'
(note: 'Domain Controllers' string is localized into your language)

Linux facl minihowto

To allow other group members full access to a directory recursively:
setfacl -R -m d:g:groupname:rwx path/foldername
d means default: by modifying the default ACL, all newly created files and directories will inherit this setting.
To modify the permissions of existing files and directories only (no default ACL):
setfacl -m g:groupname:rwx foldername

Important notes regarding files: files can't have a default ACL because they can't have child objects. An access ACL set on an individual file can override the inherited one: if a file's own ACL conflicts with the inherited ACL, the file's ACL wins and overwrites the inherited one.
Removing a user's ACL entry:
setfacl -x u:johny /path/folder
 

Living with IPFire (bye-bye pfSense)

In the first part of this article I discussed some interesting facts about pfSense. I, again, strongly recommend not using pfSense 2.2.* in production environments because it is a totally unreliable and buggy system. Okay, but what to use then?
For instance, one can choose IPFire. Yep, I did. It's a rock-solid, lightning-fast and easy-to-use system. Everything that can't be said about pfSense. I like it.
Except for one minor thing... And that thing is, sadly, not that minor.
For anyone who is familiar with standard iptables chains and logic (I mean INPUT/OUTPUT/FORWARD/etc.), the way pfSense and IPFire handle the traffic internally is very confusing.
IPFire's ruleset consists of lots of built-in chains that can be troublesome at first glance. But you will never get to know about them if you only use the GUI-based rules editor. Frankly, I've spent 3 days on creating some very basic allow and deny rules on the red0 interface, without any success. That totally screwed me up. You can just never be sure where (I mean, in which chain) your web-edited rules will end up. E.g. the rules shown below are all faulty, God knows why.
Playing with basic IPFire rules

So I ended up editing the /etc/sysconfig/firewall.local file and tadaaam, that worked. If you are an expert on iptables, forget your fancy firewall GUI editor forever.

case "$1" in
  start)
        iptables -A CUSTOMINPUT -d 255.255.255.255 -p udp --dport 7437 -j DROP
        iptables -A CUSTOMINPUT -i red0 ! -s 192.168.1.1 -p udp -j DROP
        ;;
  stop)
        iptables -D CUSTOMINPUT -d 255.255.255.255 -p udp --dport 7437 -j DROP
        iptables -D CUSTOMINPUT -i red0 ! -s 192.168.1.1 -p udp -j DROP
        ;;
esac


Just a small side note: reloading the rules with the GUI also reloads your .local defined rules.

OpenVPN and eToken5100 SafeNet token

The SafeNet ePass USB token is a PKI authenticator tool. It's fully supported on Windows operating systems, of course, and also on Linux. A neat but expensive toy. It can also be used with OpenVPN - on Windows. But you will never find any documentation on how to make these two work together on Linux! Except for this blog. Follow these steps on a Debian/Ubuntu system (this worked on a 12.* Ubuntu with Gnome; not tested with newer ones):
apt-get update
apt-get upgrade
apt-get install openvpn libhal1 hal-info
Unzip the stock driver, extract the .iso and find the proper .deb or .rpm version. In my case, I installed:
dpkg -i SafenetAuthenticationClient-9.0.43-0_amd64.deb
Run your client tool to check if the token works (and you know your password):


Make your sudo setup insecure, lol (only this line needs to be modified):
%sudo    ALL=NOPASSWD: ALL
This is needed because we want a simple way to run openvpn with root privileges. And here comes the tricky part. Find the hardware ID of your token on the command line with:
openvpn --show-pkcs11-ids
Then your client.conf must look like this (only the bold lines matter):

client
dev tun
proto udp
remote your.server.com 2001
resolv-retry infinite
nobind
persist-key
persist-tun
ca /etc/openvpn/ca.crt
ns-cert-type server
comp-lzo
verb 3
script-security 2

# for the sake of proper DNS working
up /etc/openvpn/update-resolv-conf
down /etc/openvpn/update-resolv-conf

# this is the connection with the token
pkcs11-providers /usr/lib/libeTPkcs11.so

# your ID goes here
pkcs11-id 'EnterSafe/PKCS\x2315/0250184313021110/ftsafe\x20\x28User\x20PIN\x29/5F4DD36B4A23533FC9BDBB2AC7372236E48F99E5'
or, for example:

pkcs11-id 'SafeNet\x2C\x20Inc\x2E/eToken/0223127c/John\x20token/FC67BBDD7AD8EACD'

Important: don't run openvpn as a service because you won't see the authentication prompt! Instead, in a command line do:
/usr/sbin/openvpn --config /etc/openvpn/client.conf
Entering password
Successfully typed and connected, you will see:
Connected
Do not close this terminal window because the vpn process would die immediately. But the tun interface somehow remains up even then, so you had better create a "stopopenvpn" script and use it to clean up the processes and interfaces. In my case, that was an
x-terminal-emulator -e "sudo su -c /bin/vpndown"
command; it called this simple vpndown script in a new window:
#!/bin/bash
echo "Please wait..."
killall -9 openvpn
sleep 3


The VPN is started with a user-friendly desktop icon running the
x-terminal-emulator -e "/bin/vpnup"
command. That called:
#!/bin/bash
# Refuse to start a second instance if a tun interface is already up
if ifconfig | grep -q tun; then
    echo "OPENVPN already started, please stop it first. (click -> stopvpn)"
    sleep 5
    exit 1
fi
sudo su -c "/usr/sbin/openvpn --config /etc/openvpn/client.conf"
echo "Closing interface......"
sleep 5

The funniest part is the echo "Closing interface" line, because it runs only once openvpn itself has been terminated by the stopvpn script in the other window. That is an elegant way to keep the user informed about what's going on.
An alternative way to bring the connection up without typing anything is the interactive shell tool expect:
apt-get install expect
cat startvpn
#!/usr/bin/expect
spawn sudo su -c "/usr/sbin/openvpn --config /etc/openvpn/client.conf"
expect "Enter John token Password:\r"
send "MyL1ttleP4ssword\r"
interact


Ugly bug in Draytek Vigor firewall?

One day I came across a unique error. A client reported that they were unable to query any nameserver outside their network, except when querying standard A records. So A records worked fine, but e.g. NS or MX queries failed with a timeout. The local DNS servers were properly set up with valid forwarders.
So, we experienced:
nslookup    
Default Server:  dc01.hq.local           
Address:  192.168.80.248                                                                 

>google.org
Server:  dc01.hq.local                   
Address:  192.168.80.248

Non-authoritative answer:                       
Name:    google.org                             
Address:  216.239.32.27                                                                         

>set type=mx 
> google.org                                 
Server:  dc01.hq.local                   
Address:  192.168.80.248                                                                       

DNS request timed out.                              
timeout was 2 seconds.                      
*** Request to dc01.hq.local timed-out   

> server 8.8.8.8                                   
Default Server:  google-public-dns-a.google.com           
Address:  8.8.8.8        

> google.org                            
Server:  google-public-dns-a.google.com        
Address:  8.8.8.8

DNS request timed out.                              
timeout was 2 seconds.                      
*** Request to google-public-dns-a.google.com timed-out      

What a riddle! Guess that! :)
After three hours it turned out that their Vigor 2925 firewall router had a built-in rule called "xNETBios > DNS" in the section called "Data Filter" (very informative names by the Draytek guys, phuhh). That blocked such special DNS queries - even though it was DISABLED!
Default factory settings

Factory settings


In the end I had to disable the entire Data Filter section - that way, external DNS queries started to work as expected. I'm still unable to find any explanation for this.

Model Name : Vigor2925n
Firmware Version : 3.7.6
Build Date/Time : Nov 17 2014 17:20:57
Working

ntopng install on Debian Squeeze

If you are careless enough to just follow a step-by-step tutorial like this on a good old Squeeze, you will surely end up with a failing, buggy ntopng. E.g. you won't be able to see your newly created users (the Users tab is totally empty: No Results Found)
Looks somewhat broken
or you cannot switch between your monitored interfaces. If you start ntopng from a shell, you may see something like this:
19/Aug/2015 13:28:28 [src/Redis.cpp:170] ERROR: ERR unknown command 'HSET' [HSET ntopng.host_labels ]
19/Aug/2015 13:28:28 [src/Redis.cpp:170] ERROR: ERR unknown command 'HSET' [HSET ntopng.host_labels ]
19/Aug/2015 13:28:30 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'
19/Aug/2015 13:28:30 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'
19/Aug/2015 13:28:30 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'
19/Aug/2015 13:28:30 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'
19/Aug/2015 13:28:36 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'
19/Aug/2015 13:28:36 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'
19/Aug/2015 13:28:36 [src/Redis.cpp:148] ERROR: ERR unknown command 'HGET'

This whole thing happens because your Redis installation is out of date: the Debian Squeeze repositories include Redis version 2:1.2.6-1, which predates the HSET/HGET hash commands. Simply fix that with:
echo "deb http://backports.debian.org/debian-backports squeeze-backports main">> /etc/apt/sources.list
apt-get update
apt-get -t squeeze-backports install redis-server

Now it is:
redis-server                       2:2.4.15-1~bpo60+2    
How to reset your forgotten ntopng admin password.
You might not want to bother with compiling ntopng-2.0 packages on a plain standard Squeeze. In that case here are the x64 and x86 versions. You're welcome.

Exchange 2013 Survival Kit 2.

Just found a great MS doc that efficiently explains the basics of how Exchange 2013 handles the Recoverable Items folder. In short: if a user asks you to restore some accidentally deleted and purged email, you no longer need to restore the whole database from Windows Backup and mount it just to bring the whole mailbox back to a former state. At least in theory.
If you are lucky enough, your user remembers the properties of the emails he purged:
- the sender names, or
- the subject strings, or
- the date interval in which the email(s) arrived.
Unfortunately, Exchange 2013 can't restore a subfolder in your mailbox. Find out why here.
"This seems like it would be a simple enhancement into the cmdlet since the attribute exists on the mail item object.  It would be my vote to make this enhancement since it make single-item restores almost worthless if a folder is accidentally deleted. [...] Thanks for making my life more difficult than it needs to be Microsoft."
(/me also grateful.)

Clearing a Recoverable Items Folder while Single Item recovery is enabled is a bit problematic. See Use the Shell to clean up the Recoverable Items folder for mailboxes that are placed on hold or have single item recovery enabled

The easiest way to export only the Recoverable Items folder from a mailbox to a .pst:
New-MailboxExportRequest -Mailbox joecool -FilePath \\localhost\backup\joe.pst -IncludeFolders "#RecoverableItems#"
Another interesting method using In-Place eDiscovery is explained here, but there are some limitations. According to MS: "You can use In-Place eDiscovery in the Exchange admin center (EAC) to search for missing items. However, when using the EAC, you can’t restrict the search to the Recoverable Items folder. Messages matching your search parameters will be returned even if they’re not deleted. After they’re recovered to the specified discovery mailbox, you may need to review the search results and remove unnecessary messages before recovering the remaining messages to the user’s mailbox or exporting them to a .pst file.
For details about how to use the EAC to perform an In-Place eDiscovery search, see Create an In-Place eDiscovery search. "
Frankly, I've never done a search like this in the EAC. Instead, here is a similar thing in PowerShell.
First, search your RIF and place the results into the Discovery mailbox:
Search-Mailbox "Joe Cool" -SearchQuery "from:'Sam Knows' AND keyword1" -TargetMailbox "Discovery Search Mailbox" -TargetFolder "JoeRecovery" -LogLevel Full
Second, search the Discovery mailbox again with the same phrase and put the results back into your user's (or anyone's) mailbox. The results will show up in a strange folder structure: at the top level there is a short report about the search with a .csv of the matching items attached, and somewhere deep in the folders you will find the actual mails.
Search-Mailbox "Discovery Search Mailbox" -SearchQuery "from:'Sam Knows' AND keyword1" -TargetMailbox "Joe Cool" -TargetFolder "Recovered Messages" -LogLevel Full -DeleteContent
(Note the DeleteContent switch: it's important to clear up the Discovery Search Mailbox after yourself.)
Putting the results directly into a .pst:
New-MailboxExportRequest -Mailbox "Discovery Search Mailbox" -SourceRootFolder "April Stewart Recovery" -ContentFilter {Subject -eq "April travel plans"} -FilePath \\MYSERVER\HelpDeskPst\AprilStewartRecovery.pst

You can use the EstimateOnly switch to get only an estimate of the search results without copying them to a discovery mailbox - just simulating a search to see what would actually happen (examples from Microsoft):
New-MailboxSearch "FY13 Q2 Financial Results" -StartDate "04/01/2013" -EndDate "06/30/2013" -SourceMailboxes "DG-Finance" -SearchQuery '"Financial" AND "Fabrikam"' -EstimateOnly -IncludeKeywordStatistics
Start-MailboxSearch "FY13 Q2 Financial Results"
Get-MailboxSearch "FY13 Q2 Financial Results" | FL Name,Status,LastRunBy,LastStartTime,LastEndTime,Sources,SearchQuery,ResultSizeEstimate,ResultNumberEstimate,Errors,KeywordHits

To check a user state:
Get-Mailbox "Joe Cool" | FL SingleItemRecoveryEnabled,RetainDeletedItemsFor
To enable a single user:
Set-Mailbox -Identity "Joe Cool" -SingleItemRecoveryEnabled $true
To enable everybody and raise the default retention time limit:
Get-Mailbox -ResultSize unlimited -Filter {(RecipientTypeDetails -eq 'UserMailbox')} | Set-Mailbox -SingleItemRecoveryEnabled $true -RetainDeletedItemsFor 30
Some more advanced search examples here.

How to destroy your mailboxes permanently


How to purge a disconnected mailbox:
Get-MailboxStatistics -Database <DB NAME> | where {$_.DisconnectDate -ne $null} | select DisplayName,MailboxGUID
Remove-StoreMailbox -Database <Database-Name> -Identity <MailboxGUID-from-the-previous-cmdlet> -MailboxState Disabled
(Remove-StoreMailbox only works against disconnected and soft-deleted mailboxes!)

Remove all soft-deleted mailboxes:
Get-MailboxStatistics -Database MBD01 | where {$_.DisconnectReason -eq "SoftDeleted"} | foreach {Remove-StoreMailbox -Database $_.database -Identity $_.mailboxguid -MailboxState SoftDeleted}
or
Get-MailboxStatistics -Database MDB01 | where {$_.DisconnectReason -eq "disabled"} | foreach {Remove-StoreMailbox -Database $_.database -Identity $_.mailboxguid -MailboxState disabled -Confirm:$False}  
Hard delete a mailbox (no option to restore it from the actual database!)
Remove-Mailbox <Mailbox> -Permanent:$True

Exchange Survival Kit 3. - hardening and searching

If your servers expose any sensitive data about their services (e.g. version numbers) that you want to hide from the wide world, then you definitely want to change some default settings. First, it's advisable to change your default Exchange SMTP banners and HELO string to hide the long and ugly default intro string.

For the Send Connector(s):

Open your EAC - Mail Flow - Send Connectors - select your SEND connector and click on Scoping. At the bottom, find the FQDN field and fill it in explicitly.


For the Receive Connector(s):

You won't be able to change your internal hostname to your FQDN because you will get an obscure error. The phenomenon and the solution are detailed in this blog. It's a nice trick, but personally I don't care about keeping the timestamp and so on. What's more, I don't think anyone cares about it.
So simply open your Exchange PowerShell and:
Get-ReceiveConnector | select identity,bindings
Find the connector which is bound to port 25 and:
Set-ReceiveConnector <ConnectorIdentity> -Banner "220 go ahead and make my day."
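You can verify the new banner from any client that reaches port 25; the host name below is just a placeholder for your own server:

```shell
# The first line the server answers should now be the custom banner
printf 'QUIT\r\n' | nc mail.example.com 25
```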

Hide your client's IP 


"In practice that means if you sent an email from Outlook, Outlook Web App (OWA) or an ActiveSync-connected smartphone while on the Corporate Wi-Fi, your device’s Corporate Wi-Fi IP address will be contained in the email. If you were connected to your home Internet at the time, your (public) home Internet IP address will be in the email.
This may give a recipient, or any party snooping up the email while in transit, decent clues of the network you were connected to and the whereabouts of your staff and you. " (all credits go to Will Neumann including the pics)





Searching logs for emails

An example is worth a thousand words! Note the tricky subject selector expression: it selects both the "robbery" subjects AND the empty subjects (because of the -or operator).

Get-MessageTrackingLog -Server [YOUR.CAS.SERVERNAME] -ResultSize Unlimited -Recipients [your.user@domain.com] -Start "9/12/2015 08:59:59" -End (Get-Date).AddHours(-72) | where {$_.Sender -like "*@sender.com"} | where {$_.EventId -like "*eceiv*"} | Where-Object {$_.MessageSubject -match "robbery" -or $_.MessageSubject -like ""} | select EventId,Sender,Recipients,MessageSubject,Timestamp | ConvertTo-Html > "C:\reports\track.html"

This one hits and displays messages sent to the first AND/OR (disjunction again, my favourite operation!) the second recipient in a GUI:

Get-MessageTrackingLog -Recipients john.snow@got.com,aragorn@mordor.org | Select-Object EventId,Timestamp,MessageId,Sender,Recipients,MessageSubject | Out-GridView