Channel: a sysadmin'z hard dayz

How to re-check a resized virtual disk in linux


To recognize a newly added disk:

root@host:# for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done

To recognize the modified size of old disk:

root@host:# fdisk -l

[...]
Disk /dev/sdb: 10.7 GB, 10737418240 bytes
[...]

root@host:# ls /sys/class/scsi_disk/
0:0:0:0  0:0:1:0
root@host:# echo '1' > /sys/class/scsi_disk/0\:0\:1\:0/device/rescan
root@host:# fdisk -l

[...]

Disk /dev/sdb: 214.7 GB, 214748364800 bytes
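The per-device rescan above can be wrapped in a small helper that rescans every SCSI disk at once. This is a minimal sketch; the sysfs root is parameterized only so the function can be tried against a fake tree, and the `scsi_disk` path layout is the one shown above.

```shell
#!/bin/sh
# Rescan all SCSI disks so the kernel re-reads their capacity.
# $1 optionally overrides the sysfs root (default /sys) -- a
# convenience so the sketch can be exercised against a fake tree.
rescan_all_scsi() {
    root="${1:-/sys}"
    for dev in "$root"/class/scsi_disk/*/device/rescan; do
        [ -e "$dev" ] || continue
        echo 1 > "$dev"
    done
}

# Usage (as root): rescan_all_scsi && fdisk -l
```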

How to perform an automated brick-level (mailbox level) Exchange 2003 backup

Ohh, those were the easy, happy, uncomplicated times when people used Windows 2003 SBS and Exchange 2003 servers. Even though they're EOL now, there are still many companies out there whose managers don't give a damn about security considerations and warnings.
Restoring a relatively large Exchange database from ntbackup is one of those things no sysadmin raves about. I mean, restoring the whole database just because a skilled user accidentally deleted an "extremely-important-and-high-business-value" email.
It's a sad, well-known fact that Exchange 2003 lacks the ability to keep soft-deleted items in the database for a retention period. So in the above example you have no choice but to restore everything into a second recovery database. Even more fun if your server partitions are filling up and you have no free space for a second multi-gig database.
One solution would be Exmerge, but scripting it is maybe the biggest pain in the ass I've ever seen, and it still can't export mailboxes larger than 2 GB. Forget it.
But here is my ingenious method to back up your users' email daily. All you need is a Windows backup PC on the network with two hard drives: a smaller one for the system partition and a larger one to store the backups. And Outlook 2010 installed on that system. (Ehm, just a side note: you don't need to activate that Outlook anyway.)

First, you need an account with all the rights necessary to export mailboxes. Create a user named, for example, exmerge with a super-secure password. To be on the safe side (and careless enough), add it to your Administrators group.
Open your System Manager and give exmerge full rights on your Mailbox Store.




That was everything on the server side. Go to your backup PC, open Outlook 2010 and set up the account of your exmerge user. Older versions of Outlook are no good because they don't cache shared mailboxes for offline use.
Once that's done, go and get a coffee. Then:

  •     In Outlook click File tab in the Toolbar
  •     Click Account Settings button, select Account Settings
  •     Select the E-Mail tab
  •     Highlight your mailbox, click the Change button
  •     Click the More Settings button
  •     Select the Advanced tab
  •     Click the Add button
  •     Type the first characters of your first user's name and let Outlook resolve it with the Add button
  •     Repeat previous step again and again for all the users in your organization
  •     Click the Apply and Ok buttons
  •     Click Next, Finish, and Close buttons
Now leave this PC alone and don't touch it for the next 24 hours. Hopefully one day will be enough to download all the emails your users have. It's a good idea to encrypt both hard disks in this machine because, as you may have guessed already, all those highly confidential emails will end up in that Outlook cache on your local system disk. The exact location of that cache file, which has an .ost extension, is something like:
C:\%your user profile%\Local Settings\Application Data\Microsoft\Outlook\Outlook.ost
It will grow pretty large, similar in size to your Exchange priv1.edb file.

Okay, one day later you will have all emails cached and the Outlook GUI responsive again. Now you need a simple scheduled .bat to start Outlook. Outlook needs a few quiet hours to synchronize all mailboxes. Let it do its job.
Some hours later, stop it gracefully via, e.g., a runme.bat containing:
@echo off
cscript "c:\scripts\CloseOutlook.vbs"
:EXIT

and that CloseOutlook.vbs contains:
Dim oOL
Set oOL = CreateObject("Outlook.Application")
oOL.Quit

Then grab the whole folder on your C: (if you want to be sure) and copy it with a cleverly parameterized xcopy or with any free backup software (e.g. Cobian Backup) onto your second drive. Don't run out of space! Make sure you keep just a sufficient number of versions of the .ost file.
How to restore? It's easy! DO NOT START your Outlook! Instead, open your Control Panel and find Mail. Open it and select Email accounts.

  • Select the Exchange account, and then click Change.
  • Click More Settings. 
  • Under Choose whether to work offline or online each time you start Outlook, click Manually control connection state, and then select the Choose the connection type when starting check box.
  • Exit
  • Start your Outlook and select Offline mode.
  • Find the missing emails within the mailbox in question.
  • I am a hell-damn genius!



Playing around with pattern substitution

The other day I was given a cool task: replace every second occurrence of a character in a line. If there is only one occurrence of that special char (e.g. a colon), do nothing. The list itself had thousands of lines. Digging into this task I collected some nice tricks from around the net that I wanted to record here.
#!/bin/bash

xxx="This:is:a:test"
echo "0:" `grep -o ":" <<< "$xxx" | wc -l` # simple count
y="${xxx//[^:]}"        # pattern matching, y = all the chars that match the char itself
echo "1: ""$y" # prints :::
echo "2: " ${#y} # the length of the string = 3
echo "3: " `echo $xxx | awk -F":" '{print $NF}'` # prints the last field (after the last :) = test
echo "4: " `echo $xxx | awk -F":" '{print length($0)-length($NF)}'` # similar, but prints the position of the last : in the string = 10
end=${xxx##*:}
echo "5: Last : is in column $((${#xxx} - ${#end}))" # same as above
echo "6: " `sed 's/\(.*\):.*/\1/'<<< $xxx` # cuts the string at the last occurence of : and prints the first part
echo "7: " `sed 's/.*\:/\ /g'<<< $xxx` # cuts the string at the last occurence of : and prints the rest = test
echo "8: " `sed 's/\(.*\):/\1!/'<<< $xxx` # replaces the _last_ occurence of : with a !
echo "9: " $xxx| sed 's/t$/!/' # same as above what have to specify the last char
echo "10: " $xxx| sed 's/:/!/2' # replaces the second occurence of : with !
echo "11: " ${xxx##*:} # cuts the string at the last : and prints the rest = test
echo "12: ""${xxx#*:}" # cuts out the first word, prints the rest = "is:a:test"
echo "13: " ${xxx%:*}!!!${xxx##*:} # replaces the last occurence of : with the string: !!!
echo "14: ""${xxx%?}!"  # replaces the very last character of the string with !
echo "15: " ${xxx%:*} # cuts out the last part of the string using separator : ,selecting the first parts.
echo "16: " $xxx | sed "s/:[^:]*$//"  # cuts out the last part of the string using separator : ,selecting the first parts.
echo "17: " `sed -r "s/([^:]*:){2}//"<<< $xxx` # removes the first two parts separeted by : and prints the rest= "a:test"
echo "18: ""${xxx/:/!}" # replace the first occurence without using sed
echo "19: ""${xxx//:/!}" # replace all occurences of : without using sed
echo "20: " ${xxx:5:2} # for the sake of completion, prints = is. (2 chars from the 6th char)
echo "21: " ${xxx,} # converts the first char to lowercase
echo "22: " ${xxx,,} # concerts all to lowercase
echo "23: ""${0##*/}" # prints the name of the script without using basename
#echo $xxx | awk -F: '{print $1 $2 FS $3 $4}'
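And for the record, the original task itself (replace every second occurrence of a colon, leaving lines with a single colon alone) can be solved with one sed expression: each pass of the `g` flag consumes a pair of colons and rewrites the second one. A sketch, using `!` as a stand-in replacement character:

```shell
#!/bin/bash
# Replace every 2nd, 4th, 6th... occurrence of ':' with '!'.
# A line containing fewer than two colons passes through unchanged.
every_second() {
    sed 's/\(:[^:]*\):/\1!/g'
}

every_second <<< "This:is:a:test"   # prints This:is!a:test
every_second <<< "one:colon"        # prints one:colon (unchanged)
```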

Veeam Backup & Replication 8.0 installation

I've run into this beauty recently:
[Host] Failed to install deployment service.
The Network path was not found
--tr: Failed to create persistent connection to ADMIN$ shared folder on host [Host].
--tr: Failed to install service [VeeamDeploymentService] was not installed on the host [Host].

Discussed here, here and here.
Of course I had everything okay: I could reach the ADMIN$ share, the Remote Registry service was running, and so on, all the other stuff. Then I found an interesting workaround:
"What happens if you deploy required packages on that server manually, and try to add it to a console afterwards? Required packages are VeeamHvIntegration.msi and VeeamTransport.msi that are located in C:\Program Files\Veeam\Backup and Replication\Backup\Packages. "
Sadly it didn't help either. Finally I got the clue here: "Creating another domain admin credentials fixes the problem."
I don't understand why the hell it failed to install with the default domain administrator, but anyway, who cares. Just another few hours wasted. So create a dedicated domain admin, e.g. veeamdeployer, with a super-secure password.

Whooha, success.
My first mighty Veeam backup is in progress!
File Level Restore from a Linux VM is an awesome Veeam feature.

APC Smart-UPS plan

Sometimes it's not easy to plan a complicated UPS shutdown and startup scheme. Here are some pictures of the settings of a Smart-UPS X 3000 and its management software. The UPS itself has 3 outlet groups: two dedicated to servers hosting virtual machines, and one for the network devices (switches), which always have to shut down last and start up first. I have optimized these settings myself.

General settings

A nice graphical tool to set the processes...
...in the Shutdown / Outlet sequence menu.
How can the management host, connected to the UPS via USB, shut down the other servers? It's not a trivial question. The answer is default.cmd, which is executed by the management software when the general shutdown process starts. Its original content plus my additions are the following:

@echo off
rem
rem   Maximize for best viewing
rem   This command file provides examples of proper command file syntax
rem
rem   Command Files run by PowerChute Business Edition must be placed in this directory.
rem
rem   Use the full path name of executable programs and external command files.
rem
rem   The @START command must be used to run executable programs (see example below).
rem   For the @START command, path names that include spaces must be enclosed in quotes;
rem   arguments for the executable must be outside the quotes.  A double quote must
rem   precede the quoted path name.  For example, to execute a command file in
rem   c:\Program Files\APC\PowerChute Business Edition\agent\cmdfiles called myShut.exe,
rem   the following line should be entered in the command file:
rem
rem   @START """c:\Program Files\APC\PowerChute Business Edition\agent\cmdfiles\myShut.exe"
rem
@echo on
NET USE \\my-backup\IPC$ MyPa$$word /USER:my\administrator
shutdown /s /m \\my-backup /c "UPS INITIATED SHUTDOWN!" /t 15



LVM extension

Adding a new RAID storage device to an existing LVM volume; a real-life example. First, the two new disks are added as a RAID1 mirror:


root@mylinux:~# mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdd1 /dev/sde1

 root@mylinux:~# vgdisplay vg1
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.80 TiB
  PE Size               4.00 MiB
  Total PE              471654
  Alloc PE / Size       471654 / 1.80 TiB
  Free  PE / Size       0 / 0
  VG UUID               iIXHn9-h7s1-6oMw-uFvl-BJMk-Jc8N-lEBRX4

root@mylinux:~# lvdisplay vg1
  --- Logical volume ---
  LV Path                /dev/vg1/home
  LV Name                home
  VG Name                vg1
  LV UUID                CcQBbz-2GAZ-TwWm-zVva-RsRW-j1H9-L6djE6
  LV Write Access        read/write
  LV Creation host, time server, 2014-02-26 14:26:05 +0100
  LV Status              available
  # open                 1
  LV Size                1.80 TiB
  Current LE             471654
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:0

root@mylinux:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               1.80 TiB / not usable 4.81 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              471654
  Free PE               0
  Allocated PE          471654
  PV UUID               cnWVNt-iawf-fJxq-wgm9-dnmb-rB4y-ij5Oyg

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VG0
  PV Size               18.61 GiB / not usable 4.88 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4763
  Free PE               0
  Allocated PE          4763
  PV UUID               3QdqNr-g6yH-fnL6-5jEf-Jt1k-h03Y-2HPz0v


root@mylinux:~# pvcreate /dev/md3
  Physical volume "/dev/md3" successfully created
root@mylinux:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               1.80 TiB / not usable 4.81 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              471654
  Free PE               0
  Allocated PE          471654
  PV UUID               cnWVNt-iawf-fJxq-wgm9-dnmb-rB4y-ij5Oyg

  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               VG0
  PV Size               18.61 GiB / not usable 4.88 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              4763
  Free PE               0
  Allocated PE          4763
  PV UUID               3QdqNr-g6yH-fnL6-5jEf-Jt1k-h03Y-2HPz0v

  "/dev/md3" is a new physical volume of "3.64 TiB"
  --- NEW Physical volume ---
  PV Name               /dev/md3
  VG Name
  PV Size               3.64 TiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               dQiWVr-yKXE-3l7s-2s1x-y8TD-E1w4-GCc8aF

root@mylinux:~# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.80 TiB
  PE Size               4.00 MiB
  Total PE              471654
  Alloc PE / Size       471654 / 1.80 TiB
  Free  PE / Size       0 / 0
  VG UUID               iIXHn9-h7s1-6oMw-uFvl-BJMk-Jc8N-lEBRX4

  --- Volume group ---
  VG Name               VG0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               18.61 GiB
  PE Size               4.00 MiB
  Total PE              4763
  Alloc PE / Size       4763 / 18.61 GiB
  Free  PE / Size       0 / 0
  VG UUID               ifFvFY-yt9A-w5g8-af3G-4Kf1-AJdn-Z7531i


root@mylinux:~# vgextend vg1
  Please enter a physical volume path
  Run `vgextend --help' for more information.
root@mylinux:~# vgextend vg1 /dev/md3
  Volume group "vg1" successfully extended
root@mylinux:~# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.44 TiB
  PE Size               4.00 MiB
  Total PE              1425483
  Alloc PE / Size       471654 / 1.80 TiB
  Free  PE / Size       953829 / 3.64 TiB
  VG UUID               iIXHn9-h7s1-6oMw-uFvl-BJMk-Jc8N-lEBRX4

  --- Volume group ---
  VG Name               VG0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               18.61 GiB
  PE Size               4.00 MiB
  Total PE              4763
  Alloc PE / Size       4763 / 18.61 GiB
  Free  PE / Size       0 / 0
  VG UUID               ifFvFY-yt9A-w5g8-af3G-4Kf1-AJdn-Z7531i


root@mylinux:~#
root@mylinux:~# lvextend -L+3.6TiB /dev/vg1/home
  Rounding size to boundary between physical extents: 3.60 TiB
  Extending logical volume home to 5.40 TiB
  Logical volume home successfully resized

  root@mylinux:~# vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               5.44 TiB
  PE Size               4.00 MiB
  Total PE              1425483
  Alloc PE / Size       1415373 / 5.40 TiB
  Free  PE / Size       10110 / 39.49 GiB
  VG UUID               iIXHn9-h7s1-6oMw-uFvl-BJMk-Jc8N-lEBRX4

  --- Volume group ---
  VG Name               VG0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               18.61 GiB
  PE Size               4.00 MiB
  Total PE              4763
  Alloc PE / Size       4763 / 18.61 GiB
  Free  PE / Size       0 / 0
  VG UUID               ifFvFY-yt9A-w5g8-af3G-4Kf1-AJdn-Z7531i

root@mylinux:~# xfs_growfs /dev/vg1/home
meta-data=/dev/mapper/vg1-home   isize=256    agcount=32, agsize=15092928 blks
         =                       sectsz=4096  attr=2
data     =                       bsize=4096   blocks=482973696, imaxpct=5
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=235827, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 482973696 to 1449341952


root@mylinux:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VG0-per    19G  2.5G   17G  14% /
none                  4.0K     0  4.0K   0% /sys/fs/cgroup
udev                  3.9G  4.0K  3.9G   1% /dev
tmpfs                 795M  6.5M  789M   1% /run
none                  5.0M     0  5.0M   0% /run/lock
none                  3.9G     0  3.9G   0% /run/shm
none                  100M     0  100M   0% /run/user
/dev/sdc1             3.7T  1.9T  1.8T  52% /backup
/dev/mapper/vg1-home  5.4T  1.8T  3.7T  32% /home
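As a quick sanity check of the vgdisplay figures above: VG Size should equal Total PE times the PE Size. With 4 MiB extents (1048576 MiB per TiB), the numbers line up:

```shell
#!/bin/sh
# Total PE x PE Size (4 MiB) should reproduce VG Size in TiB.
awk 'BEGIN { printf "%.2f TiB\n",  471654 * 4 / 1048576 }'  # 1.80 TiB (before vgextend)
awk 'BEGIN { printf "%.2f TiB\n", 1425483 * 4 / 1048576 }'  # 5.44 TiB (after vgextend)
```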

Three small scripts

Hey there, long time no see. Nothing especially new here, I just wanted to see googlebots heading this way. So here comes my first new year post: 3 minor scripts. The first one is for Exchange: it adds new email aliases for users from a .csv and makes them the default addresses.

Import-Csv c:\scripts\data.csv -Header UZER,ADDRZ | Foreach {
   Set-Mailbox $_.UZER -EmailAddressPolicyEnabled $false
   $user = Get-Mailbox -Identity $_.UZER
   $user.EmailAddresses += $_.ADDRZ
   Set-Mailbox $_.UZER -EmailAddresses $user.EmailAddresses
   Set-Mailbox $_.UZER -PrimarySmtpAddress $_.ADDRZ
}


Example data.csv (without any headers!):
mgibson,mel.gibson@mighty.com
ctom,tom.cruise@mighty.com
cjoe,joe.cool@mighty.com


The second one is a bit tricky. I wanted to list all my distribution groups and their members. There are lots of solutions for this, e.g. you can find an edifying blog entry here. Unfortunately most of these scripts don't work these days against Office 365 Exchange because of this unpleasant nastiness:
Cannot process argument transformation on parameter 'Identity'. Cannot convert value to type "Microsoft.Exchange.Configuration.Tasks.DistributionGroupMemberIdParameter". Error: "Cannot convert hashtable to an object of the following type: Microsoft.Exchange.Configuration.Tasks.DistributionGroupMemberIdParameter. Hashtable-to-Object conversion is not supported in restricted language mode or a Data section."
As explained on reddit by PsTakuu: it's not the object being passed into Get-DistributionGroupMember by the pipeline that causes the issue; it's that you are shoving an entire object into the first positional parameter (Identity), and it doesn't accept hash tables.
Here's a way to recreate your issue:
Get-DistributionGroup | select -First 1 | %{Get-DistributionGroupMember $_}
Here's the way to fix:
Get-DistributionGroup | select -First 1 | %{Get-DistributionGroupMember $_.identity} 

So here is the final working solution:
foreach ($group in Get-DistributionGroup) { get-distributiongroupmember $group.displayname | ft @{expression={$_.displayname};Label="$group"}}
The results can be redirected to a file like this: $( foreach (............) ) | out-file file.txt
or
$result = foreach (...)
$result | out-file file.txt -append

A +1 power list as a bonus:
Get-DistributionGroup|format-table -wrap -property name,emailaddresses,hiddenfromaddresslistsenabled,RequireSenderAuthenticationEnabled > c:\groups.txt

The third, super-simple Linux script adds users to a Linux system and to the Samba file server database. I don't care about real names, room numbers and so on. It also creates tricky .bat files to make it easier to attach the network drive for Windows users later.

#!/bin/bash
while read -r line; do
  uzer=$(echo "$line" | cut -d ':' -f1)
  pazz=$(echo "$line" | cut -d ':' -f2)
  useradd -p "$(openssl passwd -1 "$pazz")" "$uzer" --shell /bin/false --no-create-home --no-user-group
  echo -ne "$pazz\n$pazz\n" | smbpasswd -a -s "$uzer"
  echo "cmdkey /add:192.168.85.254 /user:workgroup\\$uzer /pass:$pazz" > "/root/batz/$uzer.bat"
  echo "net use m: \\\192.168.85.254\\workz /P:Yes" >> "/root/batz/$uzer.bat"
done < users.txt

example of users.txt:
melbigson:jydac3sS
tomcruise:hEieafS
joecool:nhi252ax
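As a side note, the two cut calls can be folded into the read itself with a custom IFS. A dry-run sketch of the same parsing (printing instead of creating accounts, so it can be tried safely):

```shell
#!/bin/bash
# Dry run of the user-creation loop above: print what would be
# extracted from each users.txt line, without touching the system.
dry_run() {
    while IFS=: read -r uzer pazz; do
        printf 'would add user=%s pass=%s\n' "$uzer" "$pazz"
    done < "$1"
}

# dry_run users.txt
```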


Ubiquiti UniFi nuisances and the attack of the Martians

Some weeks ago I was given a nice task. A client of ours wanted two new UniFi APs set up in their network with two new wireless networks: one for guests and one for internal use. They had a Vigor 2925 for the firewall and DHCP role. For those who are not familiar with the Ubiquiti UniFi wireless products, here are a few links:
How do I configure a "Guest Network" on UniFi AP?
Instructive reading. Based on the info there I finally decided not to use the internal "firewall" in the APs and to let the Vigor do the network separation.
UniFi - Does the controller need to be running at all times?
The official answer says "no, most of the time it is not necessary." Unfortunately this isn't entirely true with the latest firmwares. :/
No worries; since the client already had a Linux server, it looked simple to install the controller software and set up the nodes. Sadly, everything went a different way.
For some mystical reason I couldn't make the controller software running on the Linux box see its APs, even though they were in the same subnet by their IPs and in the same broadcast domain for the sake of Layer 2 communication. I spent two days on this riddle alone. Maybe it was a misconfiguration of the D-Link switches, or maybe an unsolvable incompatibility issue; I don't know to this very day. :( I tried everything, but time ran out, so I had to find a quick solution.
I decided to use a different network card and a second subnet on the Linux box solely to control the APs.
I ended up with this config:
My interfaces were:
em1       Link encap:Ethernet  HWaddr 00:25:90:xx:xx:xx
          inet addr:172.16.20.30  Bcast:172.16.20.255  Mask:255.255.255.0
          inet6 addr: fe80::225:90ff:fed3:930c/64 Scope:Link
..
em2       Link encap:Ethernet  HWaddr 00:25:90:xx:xx:xx
          inet addr:192.168.3.200  Bcast:192.168.3.255  Mask:255.255.255.0
          inet6 addr: fe80::225:90ff:fed3:930d/64 Scope:Link
..
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
..

The APs looked to be working; I think the Connected (limited) state is normal in this case. Note that I didn't use the built-in "guest network" feature because it's just ridiculous.


So everything seemed properly set. But to my greatest astonishment I couldn't reach the network share of my server on 172.16.20.30 from my internal wifi client 192.168.3.11. When I started to ping and ran tcpdump on the server, I saw that echo requests came in but replies never went back. I thought to myself: given that the kernel wanted to reply on the other interface (192.168.3.x), it's hardly surprising that it didn't work.
So I set up IP policy routing: if the packet comes from 192.168.3.11 on em1, reply on the same interface (em1) instead of em2. You know, all the iptables mangle MARK and ip route add default via 172.16.20.1 dev em1 table ... stuff, etc. etc.
It didn't work either. Suddenly a light dawned on me. I turned on kernel martian packet logging with echo 1 > /proc/sys/net/ipv4/conf/all/log_martians
and VOILA I saw:

Mar  2 20:08:03 superserver kernel: [ 2755.407570] IPv4: martian source 192.168.3.11 from 192.168.3.200, on dev em1
Mar  2 20:08:03 superserver kernel: [ 2755.407590] ll header: 00000000: ff ff ff ff ff ff 00 25 90 d3 93 0d 08 06        .......%......
Mar  2 20:08:04 superserver kernel: [ 2756.424025] IPv4: martian source 192.168.3.11 from 192.168.3.200, on dev em1
Mar  2 20:08:04 superserver kernel: [ 2756.424048] ll header: 00000000: ff ff ff ff ff ff 00 25 90 d3 93 0d 08 06        .......%......
Mar  2 20:08:05 superserver kernel: [ 2757.421639] IPv4: martian source 192.168.3.11 from 192.168.3.200, on dev em1
Mar  2 20:08:05 superserver kernel: [ 2757.421661] ll header: 00000000: ff ff ff ff ff ff 00 25 90 d3 93 0d 08 06        .......%......

A bit confusing, isn't it?! 192.168.3.200 is my own server!
I tried to turn off the martian protection with echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
but I learned that such a deep routing problem cannot be solved with a simple fix like this.
I thought very hard for an hour and finally fooled the kernel with a different subnet mask on my second interface:

auto em2
iface em2 inet static
    address 192.168.3.200
    netmask 255.255.255.192


Now it's working as expected:
root@:/# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.16.20.1     0.0.0.0         UG    0      0        0 em1
172.16.20.0     0.0.0.0         255.255.255.0   U     0      0        0 em1
192.168.3.192   0.0.0.0         255.255.255.192 U     0      0        0 em2

These were all done on a:
Linux 3.11.0-12-generic #19-Ubuntu SMP Wed Oct 9 16:20:46 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Debian Wheezy Mail Server – Postfix Dovecot Sasl MySQL PostfixAdmin and RoundCube

Shamefully, I didn't want to find my own way, so the whole tutorial I followed is here.
For my own future use, I attached the working nginx, dovecot, postfix and php5 configs to this post. There are two minor differences from the original tutorial: I don't use spam filtering, because in my case it's done by a 3rd-party provider; and I use an outgoing TLS smarthost via the mail submission port 587, detailed in postfix/main.cf.
Note that all sensitive info has been removed, and the tgz is missing a socket because
tar: example/php5/fpm/socks/ssl_example.com.sock: socket ignored.
Follow the original howto first.

Versions for the tgz are:
Linux box 3.2.0-4-amd64 #1 SMP Debian 3.2.73-2+deb7u3 x86_64 GNU/Linux
ii  nginx                              1.2.1-2.2+wheezy4                 all          small, powerful, scalable web/proxy server
ii  nginx-common                       1.2.1-2.2+wheezy4                 all          small, powerful, scalable web/proxy server - common files
ii  nginx-full                         1.2.1-2.2+wheezy4                 amd64        nginx web/proxy server (standard version)
ii  dovecot-common                     1:2.1.7-7+deb7u1                  all          Transitional package for dovecot
ii  dovecot-core                       1:2.1.7-7+deb7u1                  amd64        secure mail server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-gssapi                     1:2.1.7-7+deb7u1                  amd64        GSSAPI authentication support for Dovecot
ii  dovecot-imapd                      1:2.1.7-7+deb7u1                  amd64        secure IMAP server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-ldap                       1:2.1.7-7+deb7u1                  amd64        LDAP support for Dovecot
ii  dovecot-lmtpd                      1:2.1.7-7+deb7u1                  amd64        secure LMTP server for Dovecot
ii  dovecot-mysql                      1:2.1.7-7+deb7u1                  amd64        MySQL support for Dovecot
ii  dovecot-pgsql                      1:2.1.7-7+deb7u1                  amd64        PostgreSQL support for Dovecot
ii  dovecot-pop3d                      1:2.1.7-7+deb7u1                  amd64        secure POP3 server that supports mbox, maildir, dbox and mdbox mailboxes
ii  dovecot-sieve                      1:2.1.7-7+deb7u1                  amd64        sieve filters support for Dovecot
ii  dovecot-sqlite                     1:2.1.7-7+deb7u1                  amd64        SQLite support for Dovecot
ii  postfix                            2.9.6-2                           amd64        High-performance mail transport agent
ii  postfix-mysql                      2.9.6-2                           amd64        MySQL map support for Postfix
ii  php5-common                        5.5.33-1~dotdeb+7.1               amd64        Common files for packages built from the php5 source
ii  php5-fpm                           5.5.33-1~dotdeb+7.1               amd64        server-side, HTML-embedded scripting language (FPM-CGI binary)
ii  php5-imap                          5.5.33-1~dotdeb+7.1               amd64        IMAP module for php5
ii  php5-intl                          5.5.33-1~dotdeb+7.1               amd64        internationalisation module for php5
ii  php5-mcrypt                        5.5.33-1~dotdeb+7.1               amd64        MCrypt module for php5
ii  php5-mysql                         5.5.33-1~dotdeb+7.1               amd64        MySQL module for php5


Ban / reject users with freeradius based on MAC addresses

Freeradius is a common tool for setting up enterprise WiFi authentication. But in a public institution, e.g. a school, sooner or later your WiFi users' passwords will leak out, and after password changes your logs fill up with incorrect logins from the mischievous students. Solution: build a script that scans the logfile for incorrect logins and bans the MAC addresses of those devices. Here is a little help on how to start thinking:
add the following to your /etc/freeradius/modules/files

files rejectmac {
        key = "%{Calling-Station-ID}"
        usersfile = ${confdir}/rejectmacaddress.txt
        compat = no
}


add the following to the authorize{} section of your /etc/freeradius/sites-enabled/default

rejectmac
        if (ok) {
            reject
        }


create a new file /etc/freeradius/rejectmac.conf and add 
passwd rejectmac {
  filename = /etc/freeradius/rejectmacaddress.txt
      delimiter = ,
      format = "*Calling-Station-Id"
}


create a new file /etc/freeradius/rejectmacaddress.txt and fill it with the kiddies' MACs like this
78-F8-82-F3-8F-58,B4-CE-F6-4D-74-93,B0-45-19-C6-17-D1,50-F0-D3-1D-42-CE,00-5A-05-90-08-FE,88-07-4B-D1-17-15

add this to the beginning of your radiusd.conf
$INCLUDE rejectmac.conf

restart your freeradius daemon and get ready to go home.
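As a starting point for that scanning script, here is a minimal sketch. The log path, the "Login incorrect" pattern and the `cli AA-BB-CC-DD-EE-FF` MAC format are assumptions; check them against your own radius.log before trusting it:

```shell
#!/bin/bash
# Sketch only: scan the FreeRADIUS log for failed logins and append
# the offending MACs to the comma-separated rejectmacaddress.txt.
ban_macs() {
  logfile=$1
  banfile=$2
  threshold=${3:-5}   # failures needed before a MAC gets banned

  grep 'Login incorrect' "$logfile" \
    | grep -oE 'cli ([0-9A-Fa-f]{2}-){5}[0-9A-Fa-f]{2}' \
    | awk '{print $2}' | sort | uniq -c \
    | while read -r count mac; do
        # skip MACs that are already banned
        grep -q "$mac" "$banfile" 2>/dev/null && continue
        if [ "$count" -ge "$threshold" ]; then
          if [ -s "$banfile" ]; then
            # append to the single comma-separated line
            sed -i "s/\$/,$mac/" "$banfile"
          else
            printf '%s' "$mac" > "$banfile"
          fi
          echo "banned $mac after $count failed logins"
        fi
      done
}

# e.g.: ban_macs /var/log/freeradius/radius.log /etc/freeradius/rejectmacaddress.txt 5
```

Run it from cron and reload freeradius afterwards so the updated usersfile is picked up.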


A Mikrotik guest network can be more difficult than you may think

In recent RouterOS versions it takes only a single click to set up a guest wifi AP. By guest I mean a network that is fully or partly allowed to reach the public internet but denied to reach the internal private network. Here is a simple howto about adding a second wifi AP / slave interface. The only problem with that is that it's insecure. :( The most common way is using the QuickSet method. Everyone knows what to do seeing this window:
So if I build a second AP like this:

it's going to use the same DHCP server as the internal wifi. Obviously, because it's on the same bridge (switch) interface. I always wondered how they are still separated by RouterOS? The answer is Mikrotik's genius Layer2 firewall called bridge filtering.

But you discover an embarrassing problem if you have more IP subnets (e.g. VPN networks over the public net) and also want to apply the guest wifi filtering to them. One simply can't utilize Layer2 filtering over Layer3 routing and, of course, it doesn't work vice versa either.

Solution: forget the built-in bridge and create a new bridge only for your guest wifi.
/interface bridge add name=bridge-guestwifi
Add a new security profile for guests if you happen to not have one yet:
/interface wireless security-profiles add authentication-types=wpa2-psk mode=dynamic-keys name=guestwifi wpa2-pre-shared-key=topsecretpassword
Add your new slave interface:
/interface wireless
add disabled=no mac-address=D6:CA:6E:4F:54:28 master-interface=wlan1 name=wlan2 security-profile=guestwifi ssid="For Guests" wds-default-bridge=bridge-guestwifi
and link these 2 to each other.
/interface bridge port add bridge=bridge-guestwifi interface=wlan2

So far so good. Layer2 filtering is done now. But now the guests are totally separated from your DHCP server, so you need to create a new, dedicated DHCP pool for them. It requires a new address and subnet.
/ip address add address=192.168.100.1/24 interface=bridge-guestwifi network=192.168.100.0
/ip pool add name=guest ranges=192.168.100.100-192.168.100.254
/ip dhcp-server add address-pool=guest disabled=no interface=bridge-guestwifi name=guest
/ip dhcp-server network add address=192.168.100.0/24 dns-server=192.168.100.1 gateway=192.168.100.1

Let's suppose that you have a source NAT rule that NATs anything going out to the internet:
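The screenshot of that rule didn't survive; a typical catch-all masquerade rule, assuming ether1-gateway is the WAN interface as in the filter rule further down, looks something like:

```
/ip firewall nat
add chain=srcnat out-interface=ether1-gateway action=masquerade
```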
In that case we have good news: you don't have to set up any more NAT rules, because the guest network will hit the above rule. But it's not secured yet. The following high-priority Layer3 firewall rule will take care of them:
/ip firewall filter
add action=drop chain=forward in-interface=bridge-guestwifi out-interface=!ether1-gateway
So from now on, guests are denied to go anywhere but the public internet.

GlusterFS in a simple way

Here is the story of how I managed to install a 2-node glusterfs on CentOS, plus one client for test purposes.
In my case the hostnames and the IPs were:

192.168.183.235 s1
192.168.183.236 s2
192.168.183.237 c1

Append these to the end of /etc/hosts to make sure that simple name resolution will work.
Execute the following on both servers.

rpm -ivh  http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm 
wget  -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.5/CentOS/glusterfs-epel.repo 
yum -y install glusterfs glusterfs-fuse glusterfs-server

There's no need to install any samba packages if you don't intend to use smb.

systemctl enable glusterd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.

Both servers had a second 20G disk named sdb. I created two LVs for two bricks.

[root@s2 ~]# lvcreate -L 9G -n brick2 glustervg
 Logical volume "brick2" created.
[root@s2 ~]# lvcreate -L 9G -n brick1 glustervg
 Logical volume "brick1" created.
[root@s1 ~]# vgcreate glustervg /dev/sdb
 Volume group "glustervg" successfully created
[root@s1 ~]# lvcreate -L 9G -n brick2 glustervg
 Logical volume "brick2" created.
[root@s1 ~]# lvcreate -L 9G -n brick1 glustervg
 Logical volume "brick1" created.
[root@s2 ~]# pvdisplay

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               glustervg
  PV Size               20.00 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               511
  Allocated PE          4608
  PV UUID               filZyX-wR7W-luFX-Asyn-fYA3-f7tf-q4xGyU
[...]

[root@s2 ~]# lvdisplay

  --- Logical volume ---
  LV Path                /dev/glustervg/brick2
  LV Name                brick2
  VG Name                glustervg
  LV UUID                Rx3FPi-S3ps-x3Z0-FZrU-a2tq-IxS0-4gD2YQ
  LV Write Access        read/write
  LV Creation host, time s2, 2016-05-18 16:02:41 +0200
  LV Status              available
  # open                 0
  LV Size                9.00 GiB
  Current LE             2304
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

  --- Logical volume ---
  LV Path                /dev/glustervg/brick1
  LV Name                brick1
  VG Name                glustervg
  LV UUID                P5slcZ-dC7R-iFWv-e0pY-rvyb-YrPm-FM7YuP
  LV Write Access        read/write
  LV Creation host, time s2, 2016-05-18 16:02:43 +0200
  LV Status              available
  # open                 0
  LV Size                9.00 GiB
  Current LE             2304
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
[...]

 

[root@s1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/glustervg/brick2
  LV Name                brick2
  VG Name                glustervg
  LV UUID                7yC2Wl-0lCJ-b7WZ-rgy4-4BMl-mT0I-CUtiM2
  LV Write Access        read/write
  LV Creation host, time s1, 2016-05-18 16:01:56 +0200
  LV Status              available
  # open                 0
  LV Size                9.00 GiB
  Current LE             2304
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/glustervg/brick1
  LV Name                brick1
  VG Name                glustervg
  LV UUID                X6fzwM-qdRi-BNKH-63fa-q2O9-jvNw-u2geA2
  LV Write Access        read/write
  LV Creation host, time s1, 2016-05-18 16:02:05 +0200
  LV Status              available
  # open                 0
  LV Size                9.00 GiB
  Current LE             2304
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
[...]
 

[root@s1 ~]# mkfs.xfs /dev/glustervg/brick1
 

meta-data=/dev/glustervg/brick1  isize=256    agcount=4, agsize=589824 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2359296, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


[root@s1 ~]# mkfs.xfs /dev/glustervg/brick2

meta-data=/dev/glustervg/brick2  isize=256    agcount=4, agsize=589824 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=2359296, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0


[root@s1 ~]# mkdir -p /gluster/brick{1,2}
[root@s2 ~]# mkdir -p /gluster/brick{1,2}
[root@s1 ~]# mount /dev/glustervg/brick1 /gluster/brick1 && mount /dev/glustervg/brick2 /gluster/brick2
[root@s2 ~]# mount /dev/glustervg/brick1 /gluster/brick1 && mount /dev/glustervg/brick2 /gluster/brick2



Add the following to a newline in both /etc/fstab:


/dev/mapper/glustervg-brick1 /gluster/brick1 xfs rw,relatime,seclabel,attr2,inode64,noquota 0 0
/dev/mapper/glustervg-brick2 /gluster/brick2 xfs rw,relatime,seclabel,attr2,inode64,noquota 0 0


[root@s1 etc]# systemctl start glusterd.service

Making sure:
[root@s1 etc]# ps ax|grep gluster

 1010 ?        Ssl    0:00 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
[root@s1 etc]# gluster peer probe s2
peer probe: success.


[root@s2 etc]# gluster peer status
Number of Peers: 1
Hostname: 192.168.183.235
Uuid: f5bdc3f3-0b43-4a83-86c1-c174594566b9
State: Peer in Cluster (Connected)


[root@s1 etc]# gluster pool list
UUID                                    Hostname        State
01cf8a70-d00f-487f-875e-9e38d4529b57    s2              Connected
f5bdc3f3-0b43-4a83-86c1-c174594566b9    localhost       Connected

[root@s1 etc]# gluster volume status
No volumes present

[root@s2 etc]# gluster volume info
No volumes present

[root@s1 etc]# mkdir /gluster/brick1/mpoint1
[root@s2 etc]# mkdir /gluster/brick1/mpoint1
[root@s1 gluster]# gluster volume create myvol1 replica 2 transport tcp s1:/gluster/brick1/mpoint1 s2:/gluster/brick1/mpoint1

volume create: myvol1: failed: Staging failed on s2. Error: Host s1 is not in 'Peer in Cluster' state

Ooooops....
[root@s2 glusterfs]# ping s1
ping: unknown host s1
I forgot to check name resolution. When I fixed this and tried to create it again, I got:
[root@s1 glusterfs]# gluster volume create myvol1 replica 2 transport tcp s1:/gluster/brick1/mpoint1 s2:/gluster/brick1/mpoint1
volume create: myvol1: failed: /gluster/brick1/mpoint1 is already part of a volume
 
 WTF ??
[root@s1 glusterfs]# gluster volume get myvol1 all
volume get option: failed: Volume myvol1 does not exist
[root@s1 glusterfs]# gluster
gluster>
exit         global       help         nfs-ganesha  peer         pool         quit         snapshot     system::     volume
gluster> volume
add-brick      bitrot         delete         heal           inode-quota    profile        remove-brick   set            status         tier
attach-tier    clear-locks    detach-tier    help           list           quota          replace-brick  start          stop           top
barrier        create         get            info           log            rebalance      reset          statedump      sync

gluster> volume l
list  log
gluster> volume list
No volumes present in cluster

That's odd! Hmm. I thought it'd work: 
[root@s1 /]# rm -rf /gluster/brick1/mpoint1
[root@s1 /]# gluster volume create myvol1 replica 2 transport tcp s1:/gluster/brick1/mpoint1 s2:/gluster/brick1/mpoint1
volume create: myvol1: success: please start the volume to access data

[root@s1 /]# gluster volume list

myvol1

Yep. Success. Phuhh.
[root@s1 /]# gluster volume start myvol1
volume start: myvol1: success

[root@s2 etc]# gluster volume list

myvol1
[root@s2 etc]# gluster volume status
Status of volume: myvol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick s1:/gluster/brick1/mpoint1            49152     0          Y       2528
Brick s2:/gluster/brick1/mpoint1            49152     0          Y       10033
NFS Server on localhost                     2049      0          Y       10054
Self-heal Daemon on localhost               N/A       N/A        Y       10061
NFS Server on 192.168.183.235               2049      0          Y       2550
Self-heal Daemon on 192.168.183.235         N/A       N/A        Y       2555

Task Status of Volume myvol1
------------------------------------------------------------------------------
There are no active volume tasks

[root@s1 ~]# gluster volume create myvol2 s1:/gluster/brick2/mpoint2 s2:/gluster/brick2/mpoint2  force
volume create: myvol2: success: please start the volume to access data
[root@s1 ~]# gluster volume start myvol2
volume start: myvol2: success
[root@s1 ~]# gluster volume info
Volume Name: myvol1
Type: Replicate
Volume ID: 633b765b-c630-4007-91ca-dc42714bead4
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: s1:/gluster/brick1/mpoint1
Brick2: s2:/gluster/brick1/mpoint1
Options Reconfigured:
performance.readdir-ahead: on

Volume Name: myvol2
Type: Distribute
Volume ID: ebfa9134-0e6a-40be-8045-5b16436b88ed
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: s1:/gluster/brick2/mpoint2
Brick2: s2:/gluster/brick2/mpoint2
Options Reconfigured:
performance.readdir-ahead: on

On the client:

[root@c1 ~]# wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
[...]
[root@c1 ~]# yum -y install glusterfs glusterfs-fuse
[....]
[root@c1 ~]# mkdir  /g{1,2}
[root@c1 ~]# mount.glusterfs s1:/myvol1 /g1
[root@c1 ~]# mount.glusterfs s1:/myvol2 /g2
[root@c1 ~]# mount
[...]
s1:/myvol1 on /g1 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
s2:/myvol2 on /g2 type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@c1 ]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   28G  1.1G   27G   4% /
devtmpfs                 422M     0  422M   0% /dev
tmpfs                    431M     0  431M   0% /dev/shm
tmpfs                    431M  5.7M  426M   2% /run
tmpfs                    431M     0  431M   0% /sys/fs/cgroup
/dev/sda1                494M  164M  331M  34% /boot
tmpfs                     87M     0   87M   0% /run/user/0
s1:/myvol1               9.0G   34M  9.0G   1% /g1  (9G+9G shows as 9G because of replication (aka RAID1 over the network))
s2:/myvol2                18G   66M   18G   1% /g2  (9G+9G shows as 18G because of distribution (aka JBOD over the network))

What is the difference between distributing and striping? Here are two short snippets from the glusterhacker blog:
Distribute : A distribute volume is one, in which all the data of the volume, is distributed throughout the bricks. Based on an algorithm, that takes into account the size available in each brick, the data will be stored in any one of the available bricks. [...] The default volume type is distribute, hence my myvol2 got distributed.
Stripe: A stripe volume is one, in which the data being stored in the backend is striped into units of a particular size, among the bricks. The default unit size is 128KB, but it's configurable. If we create a striped volume of stripe count 3, and then create a 300 KB file at the mount point, the first 128KB will be stored in the first sub-volume(brick in our case), the next 128KB in the second, and the remaining 56KB in the third. The number of bricks should be a multiple of the stripe count.
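Following that description, a 3-brick striped volume could hypothetically be created like this (brick paths invented for the example; stripe volumes were later deprecated in GlusterFS, so treat it as illustration only):

```
gluster volume create myvol3 stripe 3 transport tcp \
  s1:/gluster/brick1/mpoint3 s2:/gluster/brick1/mpoint3 s1:/gluster/brick2/mpoint3 force
gluster volume start myvol3
```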

The very usable official howto is here.
   
Performance test, split brain, to be continued....

Docker minihowto

To start a new container (if the image does not exist locally, a stock one is downloaded from Docker Hub):
docker run -i -t centos:latest /bin/bash
(-i: interactive mode) (-t: allocates a pseudo-TTY) (/bin/bash: the command to run, here a shell)
List running docker containers: docker ps
List running docker containers + history : docker ps -a
List docker local images: docker images
Escape from a running container and put it in the background: CTRL-P+CTRL-Q - or attach a second shell with exec: docker exec -ti [CONTAINER-ID] bash
This starts a new bash process, and you can exit from it directly; it won't affect the original process.

On the host find the docker virtual files (aufs), confs, etc. here: /var/lib/docker
See details about an image: docker inspect IMAGENAME(e.g. centos:latest)OR ITS_RANDOM_NAME | less
To build a new container image: docker build -t MYIMAGENAME . (. = the directory containing your Dockerfile)

an example Dockerfile looks like:

FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y build-essential tcl8.5
RUN wget http://download.redis.io/releases/redis-stable.tar.gz
RUN tar xzf redis-stable.tar.gz
RUN cd redis-stable && make && make install
RUN ./redis-stable/utils/install_server.sh
EXPOSE 6379
ENTRYPOINT ["redis-server"]
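Building and running that image could then look like this (image and container names are just examples):

```
docker build -t my-redis .
docker run -d -p 6379:6379 --name redis-1 my-redis
```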
 
To see the standard output of a container: docker logs CONTAINERNAME
docker run -d -p 3000:3000 --name my-service centos:latest
(-d: starts in the background) (-p: maps the host's port 3000 (on all interfaces) to the container's service port 3000; note that options must come before the image name)
To enter inside a container with bash:  docker exec -i -t my-service /bin/bash
Tag (set an alias name for) an image: docker tag IMAGE_ID (seen in the output of docker images) REPONAME:TAG (e.g. mydockeruser/myrepo:2)
Now see what you have tagged: docker images
Enter dockerhub with your dockerhub login: docker login
Push your new built image into your pub repository: docker push REPONAME:TAG
Remove an image from localhost repository: docker rmi IMAGE_ID (force with -f)
For example, to start a new mariadb instance:
docker run --name mariadb-1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mypass -v /home/ubuntu/db/db1:/var/lib/mysql -d mariadb
(with -v you mount your localhost's folder into your container)(with -e you pass an environment variable to the container.
Passing a global variable, for example: docker run -i -t -e "WHOISTHEKING=me" ubuntu:14.04 /bin/bash -> echo $WHOISTHEKING)

Insert a file into the container directly from outside:
docker insert CONTAINERNAME http://ftp.drupal.org/files/projects/drupal-7.22.tar.gz /root/drupal.tar.gz
To commit your changes to the image: docker commit -m "commit message" -a "Your Name" CONTAINER_ID username/my-redis:latest

TO BE CONTINUED

More Powershell

The original idea was to ease the regular process of creating a new distribution group with one human member and an archive public folder. These mail-enabled security groups and public folders always get their names based on a company standard: Contoso GROUPNAME and Contoso_Groupname_Archive. The most exciting part of it is the waiting loop: we've got to make sure that the new group is created and replicated over the DCs in the domain before going on. It has to be run in an Exchange Shell.
Two minor notes: pfviewer is a special company group containing all the users with viewer rights assigned. Jane.manager1 and john.manager2 are the company head managers.

Import-Module activedirectory
$ShName = Read-Host "Please specify the new groupname, e.g.: TechGroup1"
$Name = "Contoso "+$ShName
if (!(dsquery group -samid $Name)){ NEW-ADGroup -name $Name -groupscope 2 -path "OU=ContMailLists,DC=co,DC=local" }else{Write-Host "WARNING: ADGroup already exists. PRESS CTRL+C to exit or take the consequences."}
$DotName ="contoso."+$ShName
$EmailADD = $DotName+"@contoso.com"
$PFName = "Contoso_"+$ShName+"_Archiv"
$Ember = Read-Host "Specify the login name of the user going to be a member of this group. E.g.: john.smith"
$FullPFName = "\"+$PFName
$PFEmail = "contoso"+$ShName+"Archiv@contoso.com"
$IfGroupExists = Get-DistributionGroup -name $Name -ErrorAction 'SilentlyContinue'
  if( $IfGroupExists)
      {
      $IFSTOP = Read-Host "This distribution group already exists! Press CTRL+C to exit"
   }
Write-Host -NoNewline "Please wait a bit. Shouldn't take long"
    Do
    {
        If($Idx -gt 0) {Start-sleep -s 2}
        $r = Get-ADGroup -Filter {SamAccountName -eq $Name}
        Write-Host -NoNewline "."
        $Idx = $Idx + 1
    }
    Until($r)

Enable-DistributionGroup -Identity "CN=$Name,OU=ContMailLists,DC=co,DC=local" -Alias $DotName
Set-DistributionGroup -Identity $Name -ManagedBy co.local\Admin -BypassSecurityGroupManagerCheck
Set-DistributionGroup -Identity $Name -RequireSenderAuthenticationEnabled 0 -PrimarySmtpAddress $EmailADD -WindowsEmailAddress $EmailADD -EmailAddressPolicyEnabled 0 -Alias $DotName -GrantSendOnBehalfTo jane.manager1, john.manager2, $Ember
New-PublicFolder -Name $PFName -Path \
Enable-MailPublicFolder -Identity $FullPFName -HiddenFromAddresslistsEnabled 1
Set-MailPublicFolder -Identity $FullPFName -EmailAddressPolicyEnabled 0
Set-MailPublicFolder -Identity $FullPFName -EmailAddresses $PFEmail
Add-PublicFolderClientPermission -Identity $FullPFName -accessrights ReadItems,CreateItems,FolderVisible -user pfviewer
Remove-PublicFolderClientPermission -Identity $FullPFName -accessrights ReadItems,EditOwnedItems,DeleteOwnedItems,FolderVisible -user default -Confirm:$false
Add-DistributionGroupMember -Identity $Name -member $PFName
Add-DistributionGroupMember -Identity $Name -member $Ember

File access auditing on a Windows fileserver: Data Leakage Prevention

Here is a clever script concept that helps company managers notice someone's unusual amount of file reading. That's typical behaviour for an employee who intends to quit and tries to steal all the files of the company. Such auditing software is on the market for several hundred or thousand bucks!
Luckily for you, I've written one in bash. OK, that's not good news for those who use only Windows. But it can easily be ported to any script language, for example php, so that it could run directly on the Windows fileserver or DC after installing the proper runtime environment. (PHP, Ruby, Python, etc.)
Exploring that thought further, I'm going to translate it for myself. ;) But for now, it's enough to get it working in bash.

The original idea is that we suppose all users open roughly the same number of files in their daily routine. The script alerts whenever a statistical threshold percent is exceeded for a user.
In the following example you are going to see a nice solution for lab use in which I transfer the logfile from the Windows server to a Linux server to be able to run the bash script on it. You can find detailed comments inside the script.

Step-by-step installation:
1: Enable audit log policy on your Windows Server, assign it to the target folders and test it
(Note: in the above blog you can find an advanced example. In my case I look for event id 4663 because it contains exactly the information I need.) Set the audit rules according to your needs. The fewer event rules, the better. We need to trace file reads, so the first rule is a must.


2: You need to export the specific events from the security log to a plain file. So create a getsec.ps1 file in c:\scripts\ with the following content:
Get-EventLog security -After (Get-Date).AddDays(-1) -InstanceId 4663 |select Message|ft -AutoSize -Wrap > c:\auditing\report.txt
3: Also, don't forget to create that c:\auditing folder and then put an empty file into it named: mounted

 4: Schedule the script to run at the end of the working hours or at midnight. The command is to be: (e.g.) C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe and the argument (e.g.): -executionpolicy bypass -file c:\scripts\getsec.ps1  2>&1 > C:\scripts\log.txt
5: Share c:\auditing folder with a dedicated user that is intended to be used only by the Linux server, e.g.: linuxsrv
6: On your linux box, install the following packages: cifs-utils dos2unix mutt iconv
7: Test your connection:
 [ -f /mnt/mounted ] || mount.cifs //192.168.xx.xx/auditing/ /mnt/ -o username=linuxsrv,password=Sup3rS3cur3P4$$,domain=contoso
8: Create the base directories in, e.g.
mkdir /root/auditor && cd /root/auditor
mkdir archive average stat users; echo "0"> counter

Having succeeded, congratulations: now you are ready to track your file access activity and watch out for possible data stealing FOR FREE!


Here is the mighty script. See comments inline!
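The script itself didn't survive the migration of this post; here is a minimal sketch of the concept instead. The report format (one "Account Name: user" line per 4663 event), the per-user average files and the naive rolling average are all assumptions to adapt to your environment:

```shell
#!/bin/bash
# Sketch: count today's file-read events per user from the exported
# security log and alert when a user exceeds a percentage of his
# stored average. Paths mirror the directory layout created above.
audit_check() {
  report=$1            # exported log, e.g. /mnt/report.txt
  avgdir=$2            # one file per user holding his average count
  threshold=${3:-200}  # alert when today >= threshold % of average

  grep -oE 'Account Name:[[:space:]]+[A-Za-z0-9._-]+' "$report" \
    | awk '{print $3}' | sort | uniq -c \
    | while read -r count user; do
        avgfile="$avgdir/$user"
        avg=0
        [ -f "$avgfile" ] && avg=$(cat "$avgfile")
        if [ "$avg" -gt 0 ] && [ $((count * 100 / avg)) -ge "$threshold" ]; then
          echo "ALERT: $user opened $count files today (average: $avg)"
        fi
        # naive rolling average: mean of stored average and today
        if [ "$avg" -gt 0 ]; then
          echo $(((avg + count) / 2)) > "$avgfile"
        else
          echo "$count" > "$avgfile"
        fi
      done
}

# e.g.: audit_check /mnt/report.txt /root/auditor/average 200 | mutt -s "DLP alert" boss@contoso.com
```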


Howto setup Icinga2 and Icingaweb on CentOS

On your newly installed CentOS server:
 
# this is my network setup for my own usage, won't fit yours :)
cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
NAME="eth0"
UUID="2ef9cace-1428-4dbf-aac7-7993463c359a"
DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.183.235
NETMASK=255.255.254.0
NETWORKING=yes
HOSTNAME=s1
GATEWAY=192.168.183.254
NM_CONTROLLED=no
 
yum -y install deltarpm
yum -y install wget net-tools bind-utils gcc mc
setenforce 0 # :(
mcedit /etc/selinux/config
>> change SELINUX=enforcing to SELINUX=disabled or SELINUX=permissive
yum -y update && yum -y upgrade
yum install -y epel-release
rpm --import http://packages.icinga.org/icinga.key
wget http://packages.icinga.org/epel/ICINGA-release.repo -O /etc/yum.repos.d/ICINGA-release.repo
yum makecache
yum install -y nagios-plugins-all icinga2 icinga2-ido-mysql icinga-idoutils-libdbi-mysql
yum install -y httpd php-cli php-pear php-xmlrpc php-xsl php-pdo php-soap php-gd php-ldap
mcedit /etc/php.ini
>> set date.timezone = Europe/YOURZONE
systemctl enable httpd && systemctl start httpd
yum install -y mariadb-server
systemctl start mariadb
systemctl enable mariadb
netstat -nlp | grep 3306 #(check if it runs)
mysql -u root
> use mysql;
> update user set password=PASSWORD("root_password") where User='root';
> flush privileges;
> exit
systemctl restart mariadb
mysql -u root -p
>CREATE DATABASE icinga2;
>GRANT SELECT, INSERT, UPDATE, DELETE, DROP, CREATE VIEW, INDEX, EXECUTE ON icinga2.* TO 'icinga2'@'localhost' IDENTIFIED BY 'icinga2_password';
>flush privileges;
>exit
mysql -u root -p icinga2 < /usr/share/icinga2-ido-mysql/schema/mysql.sql
mcedit /etc/icinga2/features-available/ido-mysql.conf
>> change: user = "icinga2"
>> password = "icinga2_password"
>> host = "localhost"
>> database = "icinga2"
systemctl enable icinga2 && systemctl start icinga2
tail -f /var/log/icinga2/icinga2.log #(check if it runs)
icinga2 feature enable command
icinga2 feature list # (to check)
systemctl restart icinga2
yum -y install icingaweb2 icingacli
grep icingaweb2 /etc/group #check if it's icingaweb2:x:990:apache
touch /var/www/html/index.html
chown apache /var/www/html/index.html
icingacli setup config directory --group icingaweb2
icingacli setup token create # get the token to the clipboard
icingacli setup token show # in case you missed it
systemctl restart httpd
# open a browser and type the IP address or FQDN of your server. That will be icinga.infokom.local for my case.
#next, next, you should see everything green
 


>authentication : database
>Database type: MySQL
>Host: localhost
>Database name: icingaweb2
>Username: myself
>Password: *********
>Character set: utf8
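Note: the wizard expects an icingaweb2 database that the earlier steps never created; assuming the same local MariaDB, something along these lines is needed first (the user name and password here are examples matching the form above):

```
mysql -u root -p
>CREATE DATABASE icingaweb2;
>GRANT ALL ON icingaweb2.* TO 'myself'@'localhost' IDENTIFIED BY 'web_password';
>flush privileges;
>exit
```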
#rest of the web based setup detailed here with screenshots: 
#
#Now it's time to add your first node to your server.
#On the server, run:
 
icinga2 node wizard
Welcome to the Icinga 2 Setup Wizard!

We'll guide you through all required configuration details.

Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]: n
Starting the Master setup routine...
Please specifiy the common name (CN) [icinga.infokom.local]: Press Enter
Checking for existing certificates for common name 'icinga.infokom.local'...
Certificates not yet generated. Running 'api setup' now.
information/cli: Generating new CA.
information/base: Writing private key to '/var/lib/icinga2/ca/ca.key'.
information/base: Writing X509 certificate to '/var/lib/icinga2/ca/ca.crt'.
information/cli: Generating new CSR in '/etc/icinga2/pki/icinga.infokom.local.csr'.
information/base: Writing private key to '/etc/icinga2/pki/icinga.infokom.local.key'.
information/base: Writing certificate signing request to '/etc/icinga2/pki/icinga.infokom.local.csr'.
information/cli: Signing CSR with CA and writing certificate to '/etc/icinga2/pki/icinga.infokom.local.crt'.
information/cli: Copying CA certificate to '/etc/icinga2/pki/ca.crt'.
Generating master configuration for Icinga 2.
information/cli: Adding new ApiUser 'root' in '/etc/icinga2/conf.d/api-users.conf'.
information/cli: Enabling the 'api' feature.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
information/cli: Created backup file '/etc/icinga2/zones.conf.orig'.
Please specify the API bind host/port (optional):Press Enter
Bind Host []: Press Enter
Bind Port []: Press Enter
information/cli: Created backup file '/etc/icinga2/features-available/api.conf.orig'.
information/cli: Updating constants.conf.
information/cli: Created backup file '/etc/icinga2/constants.conf.orig'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
Done.
 
# check the output if it's OK  
egrep 'NodeName|TicketSalt' /etc/icinga2/constants.conf 
mcedit /etc/icinga2/zones.conf 
# change the string NodeName to your FQDN, in my case:
cat /etc/icinga2/zones.conf
object Endpoint "icinga.infokom.local" {
}
object Zone ZoneName {
endpoints = [ "icinga.infokom.local" ]
}
 
systemctl restart icinga2.service
# to add my first client server named s2 i need a token
icinga2 pki ticket --cn 's2.infokom.local'
# On the client server:
yum install -y epel-release
rpm --import http://packages.icinga.org/icinga.key
wget http://packages.icinga.org/epel/ICINGA-release.repo -O /etc/yum.repos.d/ICINGA-release.repo
yum makecache
yum install icinga2 mc
setenforce 0 # :( 
mcedit /etc/selinux/config
>> change SELINUX=enforcing to SELINUX=disabled or SELINUX=permissive
icinga2 node wizard
Welcome to the Icinga 2 Setup Wizard!

We'll guide you through all required configuration details.

Please specify if this is a satellite setup ('n' installs a master setup) [Y/n]:Enter
Starting the Node setup routine...
Please specifiy the common name (CN) [s2.infokom.local]: Enter
Please specifiy the local zone name [s2.infokom.local]: Enter
Please specify the master endpoint(s) this node should connect to:Enter
Master Common Name (CN from your master setup): icinga.infokom.local
Do you want to establish a connection to the master from this node? [Y/n]: y
Please fill out the master connection information:Enter
Master endpoint host (Your master's IP address or FQDN): 192.168.183.235
Master endpoint port [5665]: Enter
Add more master endpoints? [y/N]: Enter
Please specify the master connection for CSR auto-signing (defaults to master endpoint host):Enter
Host [192.168.183.235]: Enter
Port [5665]: Enter
information/base: Writing private key to '/etc/icinga2/pki/s2.infokom.local.key'.
information/base: Writing X509 certificate to '/etc/icinga2/pki/s2.infokom.local.crt'.
information/cli: Generating self-signed certifiate:
information/cli: Fetching public certificate from master (192.168.183.235, 5665):

information/cli: Writing trusted certificate to file '/etc/icinga2/pki/trusted-master.crt'.
information/cli: Stored trusted master certificate in '/etc/icinga2/pki/trusted-master.crt'.

Please specify the request ticket generated on your Icinga 2 master.
(Hint: # icinga2 pki ticket --cn 's2.infokom.local'): faaec3b98221622841cc437ee74b09a1f44b1ab
information/cli: Processing self-signed certificate request. Ticket 'faaec3b98221622841cc437ee74b09a1f44b1ab'.

information/cli: Created backup file '/etc/icinga2/pki/s2.infokom.local.crt.orig'.
information/cli: Writing signed certificate to file '/etc/icinga2/pki/s2.infokom.local.crt'.
information/cli: Writing CA certificate to file '/etc/icinga2/pki/ca.crt'.
Please specify the API bind host/port (optional):Enter
Bind Host []: Enter
Bind Port []: Enter
Accept config from master? [y/N]: y
Accept commands from master? [y/N]: y
information/cli: Disabling the Notification feature.
Disabling feature notification. Make sure to restart Icinga 2 for these changes to take effect.
information/cli: Enabling the Apilistener feature.
Enabling feature api. Make sure to restart Icinga 2 for these changes to take effect.
information/cli: Created backup file '/etc/icinga2/features-available/api.conf.orig'.
information/cli: Generating local zones.conf.
information/cli: Dumping config items to file '/etc/icinga2/zones.conf'.
information/cli: Created backup file '/etc/icinga2/zones.conf.orig'.
information/cli: Updating constants.conf.
information/cli: Created backup file '/etc/icinga2/constants.conf.orig'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
information/cli: Updating constants file '/etc/icinga2/constants.conf'.
Done. 

# to check that NodeName was set correctly
grep 's2' /etc/icinga2/constants.conf
# change NodeName to your local machine name if needed; in my case it's the FQDN
mcedit /etc/icinga2/zones.conf
object Endpoint "icinga.infokom.local" {
  host = "192.168.183.235"
  port = "5665"
}

object Zone "master" {
  endpoints = [ "icinga.infokom.local" ]
}

object Endpoint "s2.infokom.local" {
}

object Zone ZoneName {
  endpoints = [ "s2.infokom.local" ]
  parent = "master"
}


systemctl restart icinga2 && systemctl enable icinga2
# wait a bit and back to the icinga server:
icinga2 node list 
# you SHOULD see your client server NOW
Node 's2.infokom.local' (last seen: Wed Jul 27 09:36:11 2016)
* Host 's2.infokom.local'
* Service 'apt'
[...]
 
icinga2 node update-config
systemctl reload icinga2.service 
Open your web GUI and see your new server; it's in PENDING state now. Wait a bit or click the CHECK NOW button in the CHECK EXECUTION section.
 


Connect your Jira instance to HipChat

Last year I got the chance to manage an Atlassian Jira and Confluence server. That was fun so far. But last week I was given a new task: fire up a HipChat instance and connect it to Jira. I wasted some days figuring out exactly what to do, so if Google brought you here, you're in luck: below is everything I never found in any Atlassian docs. Here are the steps I have done.
1: Download your HipChat VM instance and import it into a VMware host. (Change RAM, NIC etc. settings according to your needs.)
2: Start it and log in to the console with admin / hipchat (to su, type: sudo /bin/dont-blame-hipchat)
3: Set static IP networking with a command like: hipchat network -m static -i 192.168.100.20 -s 255.255.255.0 -g 192.168.100.254 -r 192.168.100.254
4: Open your /etc/hosts for editing and enter: 192.168.100.20 hipchat hipchat.mynetwork.local
5: In your nameserver, set a new record for hipchat, e.g. hipchat.mynetwork.local (192.168.100.20)
6/a: generate a self-signed SSL certificate, or
6/b: request a certificate from an external cert provider (see below *)
7: Finish your HipChat install using your (trial) licence and this certificate. (Certificate and hostname can be changed later.)
8: Install the HipChat Connect add-on in your Jira.
9: Here comes the tricky part that drove me nuts. One can't simply connect Jira to HipChat, because Jira's Java engine won't trust HipChat's cert by default. You will notice this if you check the catalina.out logfile in Jira (cat /opt/atlassian/jira/logs/catalina.out):
/rest/hipchat/integration/latest/installation/complete [c.a.p.hipchat.rest.HipChatLinkResource] javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target

So you have two choices.
First: manually add your cert to the trusted Java store. Get your server's public key, as detailed here. Once you have your public key in a file, execute this command (check your paths, of course):
/opt/atlassian/jira/jre/bin/keytool -import -alias hipchat.mighty.org -keystore /opt/atlassian/jira/jre/lib/security/cacerts -file /certs/mypubhipchat.crt
It asks you for a password. What the heck, what kind of password, you might ask! That is the default password for the Java cert store and, hopefully, nobody changed it on your system, so enter: changeit
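Before running the import it's worth sanity-checking that the file really is a parseable X.509 cert with the subject you expect. A hedged sketch follows; it generates a throwaway self-signed cert to stand in for the exported HipChat public key (the file names here are made up), then inspects it the way you would before the keytool import above.

```shell
# Stand-in for the exported HipChat public cert (replace with your real file).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj '/CN=hipchat.mighty.org' \
    -keyout /tmp/demo.key -out /tmp/mypubhipchat.crt 2>/dev/null

# Confirm it parses as X.509 and check whose name it carries:
openssl x509 -in /tmp/mypubhipchat.crt -noout -subject -fingerprint

# Then the actual import (against your Jira JRE; default store password is 'changeit'):
# /opt/atlassian/jira/jre/bin/keytool -import -alias hipchat.mighty.org \
#     -keystore /opt/atlassian/jira/jre/lib/security/cacerts \
#     -file /tmp/mypubhipchat.crt
```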

Second method: install the SSL for Jira add-on. It's easier.

See the attached screenshot: it assists you in installing the server cert. It creates an updated but temporary Java keystore file; you have to copy it in place of the production keystore later and then restart the whole Jira.

10. Success! (almost...)




* 6/b: in this case you'll need an external FQDN, so you have to own a domain name. For example, if you own the mighty.org domain name, do the following:
- create a CSR for hipchat.mighty.org with your favorite Linux home system.
- request a trusted certificate for hipchat.mighty.org from a trusted 3rd-party cert provider.
- in your INTERNAL(!) nameserver, create a new zone called hipchat.mighty.org and assign 192.168.100.20 to its @ value.
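For illustration, that internal override zone might look like this in BIND (a sketch only; the nameserver host, file path and serial are assumptions, so adapt them to your own DNS server):

```
; /etc/bind/db.hipchat.mighty.org - internal-only zone for hipchat.mighty.org
$TTL 3600
@   IN  SOA ns1.mighty.org. hostmaster.mighty.org. (
        2016072701 ; serial
        3600       ; refresh
        600        ; retry
        86400      ; expire
        300 )      ; negative TTL
    IN  NS  ns1.mighty.org.
    IN  A   192.168.100.20
```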



SCCM in my test lab

OK, that's not a big deal for anyone, but for me it was a three-day-long battle with lots of dead-ended installs, undos and redos. So, at long last, this is the famous screen I wanted to see so much! All green! /me happy now, thanks Prajwal Desai


Returning to this blog

Just a small script, noted here so I remember it: an elegant and playful way to internally back up a jira+confluence+gitlab machine daily - and avoid all those "unlikely to happen" risks.

#!/bin/bash
BACKUPLOG=/var/log/backuplog
exec >  >(tee -ia $BACKUPLOG)
exec 2> >(tee -ia $BACKUPLOG >&2)
if [ ! -f /backup/MOUNTED ]; then  # temp solution for further use
    echo FATAL_BACKUP_NOT_MOUNTED >> $BACKUPLOG
    exit 1
fi

date
echo BACKUP_STARTED

# CONFLUENCE
MYPATH=/var/lib/confluence/backups
FILE=backup-$(date +%F | sed 's/-/_/g')
cp $MYPATH/$FILE.zip /backup/confluence
[[ `ls $MYPATH|wc -l` -gt 15 ]] && find $MYPATH -mtime +15 -delete # purge old backups only if there are new ones !
[[ `ls /backup/confluence|wc -l` -gt 60 ]] && find /backup/confluence/ -type f -mtime +60 -delete

#JIRA
MYPATH=/var/lib/jira/export/
# another nice way
rsync -avh $MYPATH /backup/jira/ # no autodelete!
[ $? -ne 0 ] && echo RSYNC_ERROR_IN_BACKUP # temp set for further use
[[ `ls $MYPATH|wc -l` -gt 41 ]] && find $MYPATH -type f -mtime +20 -delete # 2 backups daily! purge old backups only if there are new ones !
[[ `ls /backup/jira|wc -l` -gt 120 ]] && find /backup/jira -mtime +60 -delete
tar -czf /backup/jira/$FILE-data.tgz /var/lib/jira/data # reuses $FILE (today's date) from above

# MYSQL SIMPLE MIRROR BACKUP
rsync -avh --delete /var/lib/automysqlbackup/ /backup/mysql/
sleep 3

# GITLAB
/opt/gitlab/bin/gitlab-rake gitlab:backup:create
sleep 3
mv /var/opt/gitlab/backups/* /backup/gitlab/

# etc
rdiff-backup /etc /backup/etc
rdiff-backup --remove-older-than 4W /backup/etc
echo BACKUP_ENDED
date
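The "purge old backups only if there are new ones" guard used above is easy to get wrong, so here is an isolated sketch of the pattern. The path and threshold are made up for the demo, and old mtimes are faked with touch -d; adapt both for real use.

```shell
# Demo of the guard: delete files older than 15 days, but only when the
# directory holds more than 4 entries (i.e. new backups actually arrived).
dir=/tmp/purge-demo
rm -rf "$dir" && mkdir -p "$dir"
for i in 1 2 3; do touch -d "30 days ago" "$dir/old-$i"; done
touch "$dir/fresh-1" "$dir/fresh-2"

[[ $(ls "$dir" | wc -l) -gt 4 ]] && find "$dir" -type f -mtime +15 -delete
ls "$dir"   # only fresh-1 and fresh-2 remain
```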

The RPC server is unavailable

Error 0x000006BA enumerating sessionnames
Error [1722]: The RPC server is unavailable.

Ever faced this error when trying to connect remotely to a Windows 2012 R2 server to query something? Setting up an exception for RPC in the firewall may look easy. But... in fact, it isn't. See: Win7/2008 or Windows 10/Server 2016.
Luckily for you, for Server 2012 R2 I'll give you the clue!
Just enable this pre-defined rule:
Remote Service Management (NP-In)