Dovecot shell scripts

Dovecot and “aws” shell scripts | Archival Integration with MS Exchange

"aws" command line setups

We use the Amazon “aws” command to access the email S3 bucket.

In a prior learning attempt, I used s3fs (a pseudo-NFS mount) to access the email files. It is not reliable, but it can help when developing shell scripts to quickly observe file behaviour.

Verify “aws” is installed (it should be; if not, do an internet search on installing the AWS CLI):

aws help
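You can also confirm the installed version; Amazon Linux typically ships AWS CLI v2:

aws --version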

Configure aws for your region:

As part of the prerequisites, you will previously have created an IAM “user” entry for EC2 and S3 bucket access. This holds your public and private keys.

For example, under IAM Users, I have this entry (it may not need all of these, but this works):

xxxxxxx (my user name):
AdministratorAccess (some configurations may show “AdminAccess” instead, depending on when you created them on the platform)
AmazonS3FullAccess
AWSLambda_FullAccess
CloudWatchFullAccess

When creating the user, we are given access keys (public and private). We must never lose these. (We could, but we would then need to create new keys and reconfigure everywhere we have used them.)

Then, use these commands for your region and keys:

cd ~
ls -la

[you should not see the folder .aws]

mkdir .aws
vi .aws/config
[add 6 lines only, two of which are blank lines at the end. You must have two blank lines.]
[default]
region=ap-southeast-2
aws_access_key_id=USER_ACCESS_KEY
aws_secret_access_key=USER_ACCESS_PRIVATE_KEY


[save and exit]
[now run the aws configure command and simply press the Enter key after each prompt]

aws configure
AWS Access Key ID [****************LKVG]: 
AWS Secret Access Key [****************Wcae]: 
Default region name [ap-southeast-2]: 
Default output format [None]: 

[Validate you can access a bucket. Try it with one of your own buckets; in this example, there is already a bucket called domain.au.inbox:]

aws s3 ls s3://domain.au.inbox/

[If the command works, there will be no error trying to access the bucket. We can now proceed with a shell script.]
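If the listing fails, a quick way to check which identity your keys resolve to is the standard STS call (nothing here is specific to this setup):

aws sts get-caller-identity

It prints the account number and the ARN of the IAM user you created above.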

Under IAM > Roles, you will previously have made a role (or do so now) that lets your EC2 instance access buckets.
For example:

yyyyyyy (my IAM role name. Later it will appear in the IAM Roles list as “AWS Service: ec2”.):
AdministratorAccess
AmazonS3FullAccess
CloudWatchFullAccess

Now, I am not an expert on IAM, and some of my configurations are historic. I did have an IAM group with Admin and S3 full access, and I made my user part of that group. In any case, once you have your IAM role configured, make sure it is attached to the instance in the EC2 console:

EC2 (Sydney for Australia) > Actions > Security > Modify IAM Role, then add it.
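If you prefer the command line, the attach can be done with the standard EC2 API call below. The instance ID is a hypothetical placeholder, and this assumes the role has an instance profile of the same name (the IAM console creates one by default):

aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=yyyyyyy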

 

Important note:

When you previously created your email bucket (with public access) in Australia’s region and then added your SES email rule(s), the bucket needed permissions for SES to write to it.

That is, the Permissions tab of your bucket will contain a policy like this:

(where domain.au.inbox is replaced with your own bucket name)

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSESPuts",
            "Effect": "Allow",
            "Principal": {
                "Service": "ses.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::domain.au.inbox/*",
            "Condition": {
                "StringEquals": {
                    "AWS:SourceAccount": "839054678433"
                },
                "StringLike": {
                    "AWS:SourceArn": "arn:aws:ses:*"
                }
            }
        }
    ]
}
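Should you ever need to reapply this policy from the shell rather than the console, the standard s3api call works. This sketch assumes you saved the JSON above to a file named ses-policy.json (a name made up for this example):

aws s3api put-bucket-policy --bucket domain.au.inbox --policy file://ses-policy.json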


Shell Scripts and Crontab - SES/S3

We have previously built the structure for /data/USER_NAME/Maildir/ and soft linked /home/USER_NAME/Maildir to it.
When we first use an email Client with Dovecot, it creates the Maildir/new directory.

Our bucket must have .archive and .dmarc subfolders so the scripting below does not fail.

email-in.sh

Notes: I have given my full script, which can be cut down. I check whether an email came from a previous archive; if so, I insert the “¹” superscript into the subject line.
If that character already exists, I do not add it again, such as in a forwarded email containing the archive. Forwarded emails back to you are, of course, archived.
The script checks whether a file is a DMARC report from Google or Amazon and puts it in the .dmarc folder.
Otherwise, the email is delivered to your Dovecot Maildir/new directory, and an encrypted (base64-armoured) copy goes into the .archive folder.
A .csv file in /data/fred is appended with some metadata. You can run a private web page to download this file, or manually view/download it.

We then create a script to place archived files back into your inbox for viewing.

You must replace the ^M string below with a single character:
Cut and paste the script below, then edit it. Replace each two-character ^M sequence with the single carriage-return control character.
In vi this is done by typing CONTROL-V followed by CONTROL-M (or Enter). It is in three places in the Register section below. This is needed because email files use CRLF line endings (SMTP requires them), so the header lines we grep carry a trailing carriage return that must be stripped.
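As an alternative, if your sed is GNU sed (it is on Amazon Linux), you can avoid embedding the control character altogether by writing the escape sequence \r instead, for example:

from=`grep "From: " /data/$2/$i|awk '{print $NF}'|sed 's/\r//g'`

The script below keeps the literal ^M form.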

cd /home/ec2-user

vi email-in.sh

#!/bin/sh

# $1 e-mail bucket - e.g. fred.domain.au.inbox
# $2 user name - e.g. fred
# $3 user encryption password - always keep once in use - e.g. deputyD0g - avoid ! character unless \! escaped
# Subject Symbol - ¹ archive (archived emails are retrieved via email-archive.sh)

# --- BEGIN LOOP ALL FILES ---

for i in `aws s3 ls s3://$1 | grep -v .archive | grep -v .dmarc| awk '{print $NF}'`
do
        # --- BEGIN TIMESTAMP ---
        # get timestamp for each file
        dat=""
        size=""
        time=""
        dat=`aws s3 ls s3://$1/$i|grep -v .archive|awk '{print $1, $2, $3}'`
        time=`echo $dat | awk '{print $1, $2}'`
        size=`echo $dat | awk '{print $3}'`
        # --- END TIMESTAMP ---

# --- BEGIN PRELIMINARY ---

# Copy original file to /data/user (e.g. fred) so we can work with it
# Then encrypt/copy original file to .archive without alteration, except adding superscript "¹" to indicate it is archived
# When any email is processed it is checked to see it is not already from an archive so that the "¹" superscript is not added again
# This means we choose not to put an archive character into forwarded emails that were previously archived.

k=$i".enc"
o=$i".o"
aws s3 cp s3://$1/$i /data/$2/$i
aws s3 rm s3://$1/$i

# only process normally if not a dmarc report
m=""
m=`grep -i "Report Domain" /data/$2/$i`
if [ "$m" = "" ] ;
then

# --- END PRELIMINARY --- WHERE $i is what we work with after our archive section is finished

# --- BEGIN ARCHIVE SECTION --- 

cp -p /data/$2/$i /data/$2/$o
p=""
p=`grep 'Subject:' /data/$2/$o | grep -v "DKIM" | grep -v "Subject:Message-ID" | grep -v "Subject:Date:" |  grep -E -i '¹'`
if [ "$p" = "" ] ;
then
 # add the "¹" superscript as it is not present
 sed '0,/Subject: /{s//Subject: ¹ /}' /data/$2/$o | /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -out /data/$2/$k
 aws s3 cp /data/$2/$k s3://$1/.archive/$k
 # bucket does not permit us to change the file ownership in .archive
 rm /data/$2/$o /data/$2/$k
else
 # if superscript was previously added, just archive the file (even if from a newly forwarded email)
 /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -in /data/$2/$o -out /data/$2/$k
 chown $2 /data/$2/$k
 chgrp $2 /data/$2/$k
 aws s3 cp /data/$2/$k s3://$1/.archive/$k
 rm /data/$2/$o /data/$2/$k
fi

# --- END ARCHIVE SECTION --- 

# We do not test for tracking or base64 tracking content.

# --- BEGIN REGISTER --- append register.csv file - collation of all metadata
# We append the aws filename and date, all From: and Subject: strings

from=""
from=`grep "From: " /data/$2/$i|awk '{print $NF}'|sed 's/^M//g'`
subject=""
subject=`grep "Subject: " /data/$2/$i|sed 's/,/-/g'|awk -F: '{print $NF}'|sed 's/^M//g'`
to=""
to=`grep "To:[ ]" /data/$2/$i|sed 's/,/-/g'|awk -FTo: '{print $NF}'|sed 's/^M//g'`
echo $time , $i , $from, $to, $subject >> /data/$2/register-$2.csv

# --- END REGISTER ---

# --- Place email into the inbox ---

chown $2 /data/$2/$i
chgrp $2 /data/$2/$i

mv /data/$2/$i /data/$2/Maildir/new/$i

# --- END LOOP ALL FILES ---

else
aws s3 mv /data/$2/$i s3://$1/.dmarc/$i.eml
fi
# end if it was a dmarc report

done

exit

[save and exit]

chmod 775 email-in.sh
chown root email-in.sh
chgrp ec2-user email-in.sh

cp -p email-in.sh email-in-30.sh

vi email-in-30.sh

[at the top of the file, below the first comments, simply insert this command; cron's minimum interval is one minute, so this second copy gives an effective 30-second polling cycle:]

sleep 30

[save and exit]

Test by sending an email from your regular email address to the new Dovecot address, and look in the S3 console’s bucket to see that the email arrives.
In our example the bucket is called snotbat.com.inbox and fred is the user, with encryption password deputyD0g.

If we have an email in the bucket, we can manually run the command with some details:

cd /home/ec2-user
sh -x ./email-in.sh snotbat.com.inbox fred deputyD0g

[This is the type of output (the trace below is from an earlier run against the domain.au.inbox bucket, so names differ):]

++ aws s3 ls s3://domain.au.inbox
++ awk '{print $NF}'
+ for i in `aws s3 ls s3://$1 | awk '{print $NF}'`
+ aws s3 cp s3://domain.au.inbox/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381 /data/fred/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381
download: s3://domain.au.inbox/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381 to ../../data/fred/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381
+ chown email /data/fred/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381
+ mv /data/fred/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381 /data/fred/Maildir/new/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381
+ aws s3 rm s3://domain.au.inbox/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381
delete: s3://domain.au.inbox/igi8nlf4bq4qlrj3kmu20plvu89qjmpb1ug36381
+ exit

You can see the Dovecot file with some file renaming under /data/fred/Maildir/cur.

There you can use a viewer or editor to see the full headers (which will not show completely in an email client).

For instance: (an extract below showing dkim, spf, dmarc)

cd /data/fred/Maildir/cur
ls

df0d4njqfm2oksn144odmb38ss3bpji5bjgn4j01:2,S

more df0d4njqfm2oksn144odmb38ss3bpji5bjgn4j01:2,S

X-SES-Spam-Verdict: PASS
X-SES-Virus-Verdict: PASS
Received-SPF: pass
spf=pass
dkim=pass header.i=
dmarc=pass header.from=

Let’s add this script to crontab:

[You are always logged in as root in my examples]
crontab -e
[press the "i" key to insert, as we are defaulted in my examples to the vi editor]
* * * * * /home/ec2-user/email-in.sh snotbat.com.inbox fred deputyD0g >/dev/null 2>&1
* * * * * /home/ec2-user/email-in-30.sh snotbat.com.inbox fred deputyD0g >/dev/null 2>&1

[save and exit]

[You can have a single entry if you wish, or extend the crontab timing to once every 5 minutes like this:]

*/5 * * * * /home/ec2-user/email-in.sh snotbat.com.inbox fred deputyD0g >/dev/null 2>&1

[review your entries:]

crontab -l

 

The above script inserts the character “¹” into the Subject line so that if you restore the email at a future time, the superscript “¹” reminds you it is an archived file. You could use almost any character, symbol, or word; “¹” is unobtrusive.

As cron is a standard system service, there is no issue using it.

Now send yourself an email and manually look in the bucket’s .archive directory.
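You can also verify that an archived copy decrypts cleanly. FILENAME below is a hypothetical placeholder; substitute a name listed in .archive, and use your own bucket and encryption password:

cd /data/fred
aws s3 cp s3://snotbat.com.inbox/.archive/FILENAME.enc .
/usr/bin/openssl aes-256-cbc -d -a -salt -pbkdf2 -k deputyD0g -in FILENAME.enc | head -25
rm FILENAME.enc

You should see the original headers, with the ¹ character in the Subject line.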

Now that we have an archive, we can put retention on it:

Under your S3 bucket name, go to the Management tab, and click on Lifecycle rules. Create a rule with any name you wish, but in the prefix field, type:

.archive/

This must have the trailing forward slash, and it ensures we apply the rule only to this prefix. See below:

[screenshot: lifecycle rule scoped to the .archive/ prefix]
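If you prefer to create the rule from the shell, the s3api equivalent looks like this. The rule ID is an arbitrary name made up for this example, and 365 days matches the archive retention I mention later:

aws s3api put-bucket-lifecycle-configuration --bucket domain.au.inbox --lifecycle-configuration '{"Rules":[{"ID":"expire-archive","Filter":{"Prefix":".archive/"},"Status":"Enabled","Expiration":{"Days":365}}]}'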

Now we write a script to retrieve archived files. These are placed back into the inbox folder. The file is placed directly into the Maildir/new directory, so there is no doubling up of the special character.

cd /home/ec2-user
vi email-archive.sh

#!/bin/sh

echo " "
echo "-------------------------------------------------------------------------------------------------------------"
echo " "
echo "When entering your search string, . represents ALL files in the bucket, which can be a lot"
echo "A string of characters followed by . represents a search on those starting characters"
echo "For example, k. returns all files starting with k while k0. returns all files starting with k0"
echo "You may search groups, such as ci5. m2. and so on"
echo "Please do not use other wildcard characters for the search. Your S3 Bucket lists the .archive files you need"
echo "The user archive password is not necessarily the user password, but is the encryption password always used"
echo "If your encryption password has the ! character, escape it with the \ character - e.g. g00!fy needs g00\!fy"
echo "This program will first attempt to list the files you search for, and will then ask to proceed"
echo " "
echo "-------------------------------------------------------------------------------------------------------------"
echo ""
read -p "[Enter Bucket Name           ]: " bucket
read -p "[Enter User Name             ]: " user
read -p "[Enter User Archive Password ]: " password
read -p "[Enter Search String         ]: " search

 # for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$search`
  IFS=$'\n'
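  # split the (up to seven) space-separated search terms onto separate lines,
  # then list each matching .enc file with its date, time and size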
  for f in `echo $search|awk '{printf $1;printf "\n" $2;  printf "\n" $3; printf "\n" $4; printf "\n" $5; printf "\n" $6; printf "\n" $7;}'`
  do
  for i in `aws s3 ls s3://$bucket/.archive/ | grep ".enc"|sort -n|awk '{print $4,$1,$2,$3}'|grep ^$f`
   do
   echo $i
   done
  done

 answer=""
 read -p "[Proceed? (y for yes)       ]: " answer

 if [ "$answer" = "y" ] ;
 then
 # for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$search`
 for f in `echo $search|awk '{printf $1;printf "\n" $2;  printf "\n" $3; printf "\n" $4; printf "\n" $5; printf "\n" $6; printf "\n" $7;}'`
  do
  for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$f`
  do
 echo $i
 aws s3 cp s3://$bucket/.archive/$i /data/$user
 /usr/bin/openssl aes-256-cbc -d -a -salt -pbkdf2 -k $password -in /data/$user/$i > /data/$user/Maildir/new/$i.eml
 rm /data/$user/$i
 done
 done
fi

exit


This script allows you to use up to seven search or matching strings in the user input. This is not normal Unix wildcard searching; it is “grep” searching. As you can see, you have to use a terminal shell login to run the script. If someone knows a way to do this from a web page, I’d like to know, as it involves reading input.

In our example above, we could find the file df0d4njqfm2oksn144odmb38ss3bpji5bjgn4j01 by typing df. or df0d. and so on.
If we type ., we get all archive files (not recommended). If we type things like d. c9. and so on, we get various unique combinations.

Here is example output:

cd /home/ec2-user
./email-archive.sh
 
-------------------------------------------------------------------------------------------------------------
 
When entering your search string, . represents ALL files in the bucket, which can be a lot
A string of characters followed by . represents a search on those starting characters
For example, k. returns all files starting with k while k0. returns all files starting with k0
You may search groups, such as ci5. m2. and so on
Please do not use other wildcard characters for the search. Your S3 Bucket lists the .archive files you need
The user archive password is not necessarily the user password, but is the encryption password always used
If your encryption password has the ! character, escape it with the \ character - e.g. super!d0ggy needs super\!d0ggy
This program will first attempt to list the files you search for, and will then ask to proceed
 
-------------------------------------------------------------------------------------------------------------

[Enter Bucket Name           ]: domain.au.inbox
[Enter User Name             ]: fred
[Enter User Archive Password ]: superD0ggy2000
[Enter Search String         ]: .
pl7nqu2mnodkpg175g40nsodf6c24k67lftmglg1.enc 2023-03-20 14:52:34 19695
[Proceed? (y for yes)       ]: 

The .csv register we created in /data/fred helps us decide which files we want to retrieve.

You may wish to manually edit the register file to add a top column with headings like:
DATE TIME, AWS FILENAME, FROM, TO, SUBJECT
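If you would rather insert that heading row from the shell, GNU sed can add it as line 1 (this edits the file in place, so take a copy first if you are cautious):

sed -i '1i DATE TIME, AWS FILENAME, FROM, TO, SUBJECT' /data/fred/register-fred.csv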

How to download the .csv file? One can use FileZilla, of course, but if you have a password-protected web page, you can have a button with this URL:

https://domain.au/data/fred/register-fred.csv

Then in your web site’s root directory, e.g. under /var/www/html, create a softlink:

cd /var/www/html
ln -s /data data
ls -l

[output like this:]

lrwxrwxrwx   1 root   apache     5 Mar 12 19:41 data -> /data

As mentioned, MIME is not good for this kind of work. As an example, where we search for the “To: ” strings: if the email was forwarded from a gmail message at some point in its history, that content will be base64-encoded, so we cannot record in the .csv file that it ever came from gmail, unless we look at the content in the email client.

It would be nice to identify tracking, but some tracking is important, such as in electronic document signing. And even if we could flag tracking, we cannot do so from base64-encoded content.

Further Security

This script shows the unknown IP addresses that try to log into the system (substitute your own IP addresses in the grep -v exclusions):

cd /home/ec2-user
vi mail_abuse.sh

#!/bin/sh
echo UNKNOWN EMAIL CONNECTION ATTEMPTS
echo ""
cd /var/log
grep unknown mail.log|awk '{print $NF}'|grep -v YOUR_EC2_IP_ADDRESS|grep -v YOUR_OWN_STATIC_IP_ADDRESS|grep unknown|sort -u|awk -F[ '{print $2}'|awk -F] '{print $1}'
echo ""
exit

[save and exit]

chmod 777 mail_abuse.sh

./mail_abuse.sh

We can extend protection by rejecting clients with bad PTR records and hostnames.

For example, for the IP addresses listed above you can use the “host” command followed by the IP address to check its reverse DNS (PTR) record.
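With the bind-utils package installed, the “host” command performs the reverse (PTR) lookup; the address below is a documentation-range placeholder:

host 203.0.113.5

A failed or mismatched lookup suggests the client is worth blocking.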

This configuration adds more protection in /etc/postfix/main.cf.
You must comment out the previous smtpd_recipient_restrictions line, as we add it again below with more options.
Refer: www.linuxbabe.com/mail-server/block-email-spam-postfix
That article includes whitelisting instructions if you need them.

smtpd_sender_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unknown_sender_domain, reject_unknown_reverse_client_hostname, reject_unknown_client_hostname
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_invalid_helo_hostname, reject_non_fqdn_helo_hostname, reject_unknown_helo_hostname
smtpd_recipient_restrictions = 
   permit_sasl_authenticated,
   permit_mynetworks,
   reject_unauth_destination,
   permit_sasl_authenticated,
   check_policy_service unix:private/policyd-spf,
   check_policy_service inet:127.0.0.1:10023,
   reject_rhsbl_helo dbl.spamhaus.org,
   reject_rhsbl_reverse_client dbl.spamhaus.org,
   reject_rhsbl_sender dbl.spamhaus.org,
   permit_dnswl_client list.dnswl.org=127.0.[0..255].[1..3],
   permit_dnswl_client swl.spamhaus.org,
   reject_rbl_client zen.spamhaus.org
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination

Just append the lines to /etc/postfix/main.cf, and restart Postfix.
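A sketch of that final step, using the standard Postfix and systemd commands:

postfix check
systemctl restart postfix
systemctl status postfix

postfix check reports syntax errors in main.cf before you restart.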


MS Exchange - Copy emails to S3 Bucket

It is possible to forward a copy of all MS Exchange INBOX emails to an Amazon S3 bucket.

In MS Exchange > Admin Center > Mail Flow > Remote Domains, we add the domain we want to forward to.
We uncheck the various options for replies and reports. We do check the “allow automatic forwarding” box, and have the forwarding keep the original copy with MS Exchange, so that mail still goes to our MS Exchange address as normal.

We do not need to add the remote email address as an alias to the primary email in MS Exchange > Admin Center > Mailboxes > Email Addresses.

In Microsoft 365 Defender (under the Security Admin section) we add a custom policy:

Defender > Policies & rules > Threat policies > Anti-spam policies > Create policy.

Give it a title such as Custom Outbound Mail Forward. The User field is your primary email address (a dropdown searches for it as you type), Groups is empty, and Domains is your MS Exchange primary domain (not the remote domain bucket). Your protection settings will be:
0
0
0
Restrict the user from sending mail until the following day (this is a default; you could try editing it later once you know it works)
Forwarding rules: Automatic forwarding rules: “On – Forwarding is enabled”. Notifications are empty.
This may save but give an error message the first time around, saying you have to configure permissions and wait up to 48 hours before retrying.

Please see these materials to help after you get this error:

learn.microsoft.com/en-us/microsoft-365/security/office-365-security/anti-malware-protection-about?view=o365-worldwide
jasoncoltrin.com/2021/12/20/how-to-fix-550-5-7-520-access-denied-your-organization-does-not-allow-external-forwarding/
learn.microsoft.com/en-us/exchange/recipients/user-mailboxes/email-forwarding?view=exchserver-2019
learn.microsoft.com/en-us/exchange/recipients-in-exchange-online/manage-user-mailboxes/configure-email-forwarding
learn.microsoft.com/en-us/exchange/mail-flow-best-practices/remote-domains/manage-remote-domains

You can also configure Defender to remove emails that have commonly used bad attachments, and hunt around the MS Exchange policy areas for adding further security.

You can also review your security score in Azure as another job to do, and respond to the recommendations by marking that you use 3rd parties (implying Google authentication) or “planned”, which improves your score.

 

Here is the bad attachment screen:

[screenshot of the bad attachments settings]

____________________________________________________________________________________________________________________________

The S3 bucket will have been set up and working with SES for our remote domain name specified above. That means DNS records exist somewhere, either in R53 or Google Domains or wherever. The SES email rules only need to place the email into the bucket. We are not using Dovecot/IMAP, as it makes no sense to do so. We can place 365-day retention on the bucket.

If we want the remote domain to process emails in some further way, we would need an EC2 instance with a crontab shell script to do various things to the bucket email objects (files). Using the aws commands, we don’t need an EC2 instance dedicated to the remote domain. If we were using Dovecot with the domain, then we would configure a Linux instance with Dovecot. There is no need to, as the idea is simply to have an archive backup of all INBOX emails in a convenient way.

One can use a script (editing one of my previous scripts) to place files into a subfolder such as ./inbox, adding a .eml file extension. Then you can manually use MSP360 Explorer to copy emails to your PC and drag & drop them into a client like eM Client, into its local Inbox folder, to view as many historical emails as you wish. Or you could use my scripts to keep a register.csv file and search for a particular sender and retrieve those file names.

We don’t need to do much, as the idea is that normal email gets annoying as it fills up, so we tend to delete many emails. If we have deleted one we really want back, it will be in the S3 bucket.

The scripts below were built on the basis of having several subfolders in the S3 bucket with Dovecot up and running. I modified this so that files are processed only into .inbox, removing the base64 encryption etc., and I do not have dovecot, saslauthd, or postfix running. DMARC files are still processed, as my DNS entries set up DKIM, in case I want to use the remote domain as an email service.

MS Exchange needs 24 to 48 hours when you first attempt to change Organization Settings, so you do some steps that you come back to the next day until those configurations work.

The S3 bucket now contains these subdirectories:

.archive/
.BAK/
.dmarc/
.inbox/
.restore/
.sent/

These are mainly for my own testing or future work. In this scenario I am transferring files to /data/user, and using aws commands to get the .eml files into .inbox/ and delete them from the root folder. My scripts still process DMARC files into .dmarc/.

I have expire policies on the subfolders.

The shell scripts are: (remember, the ^M character cannot be copied by hand; it is created in vi using the “control-v control-m” sequence.)

email-in.sh, as though Dovecot is working:

#!/bin/sh
# $1 e-mail bucket - e.g. fred.domain.au.inbox
# $2 user name - e.g. fred
# $3 user encryption password - always keep once in use - e.g. deputyD0g - avoid ! character unless \! escaped
# Subject Symbol - ¹ archive (archived emails are retrieved via email-archive.sh)
# --- BEGIN LOOP ALL FILES ---
for i in `aws s3 ls s3://$1 | grep -v .archive | grep -v .dmarc | grep -v .sent | grep -v .BAK | grep -v .restore | grep -v .inbox | awk '{print $NF}'`
do
        # --- BEGIN TIMESTAMP ---
        # get timestamp for each file
        dat=""
        size=""
        time=""
        dat=`aws s3 ls s3://$1/$i|grep -v .archive | grep -v .dmarc | grep -v .sent | grep -v .BAK | grep -v .restore| grep -v .inbox |  awk '{print $1, $2, $3}'`
        time=`echo $dat | awk '{print $1, $2}'`
        size=`echo $dat | awk '{print $3}'`
        # --- END TIMESTAMP ---
# --- BEGIN PRELIMINARY ---
# Copy original file to /data/user (e.g. fred) so we can work with it
# Then encrypt/copy original file to .archive without alteration, except adding superscript "¹" to indicate it is archived
# When any email is processed it is checked to see it is not already from an archive so that the "¹" superscript is not added again
# This means we choose not to put an archive character into forwarded emails that were previously archived.
k=$i".enc"
o=$i".o"
aws s3 cp s3://$1/$i /data/$2/$i
aws s3 rm s3://$1/$i
# only process normally if not a dmarc report
m=""
m=`grep -i "Report Domain" /data/$2/$i`
if [ "$m" = "" ] ;
then
# --- END PRELIMINARY --- WHERE $i is what we work with after our archive section is finished
# --- BEGIN ARCHIVE SECTION --- 
cp -p /data/$2/$i /data/$2/$o
p=""
p=`grep 'Subject:' /data/$2/$o | grep -v "DKIM" | grep -v "Subject:Message-ID" | grep -v "Subject:Date:" |  grep -E -i '¹'`
if [ "$p" = "" ] ;
then
 # add the "¹" superscript as it is not present
 sed '0,/Subject: /{s//Subject: ¹ /}' /data/$2/$o | /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -out /data/$2/$k
 aws s3 cp /data/$2/$k s3://$1/.archive/$k
 # bucket does not permit us to change the file ownership in .archive
 rm /data/$2/$o /data/$2/$k
else
 # if superscript was previously added, just archive the file (even if from a newly forwarded email)
 /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -in /data/$2/$o -out /data/$2/$k
 chown $2 /data/$2/$k
 chgrp $2 /data/$2/$k
 aws s3 cp /data/$2/$k s3://$1/.archive/$k
 rm /data/$2/$o /data/$2/$k
fi
# --- END ARCHIVE SECTION --- 
# We do not test for tracking or base64 tracking content.
# --- BEGIN REGISTER --- append register.csv file - collation of all metadata
# We append the aws filename and date, all From: and Subject: strings
from=""
from=`grep "From: " /data/$2/$i|awk '{print $NF}'|sed 's/^M//g'`
subject=""
subject=`grep "Subject: " /data/$2/$i|sed 's/,/-/g'|awk -F: '{print $NF}'|sed 's/^M//g'`
to=""
to=`grep "To:[ ]" /data/$2/$i|sed 's/,/-/g'|awk -FTo: '{print $NF}'|sed 's/^M//g'`
echo $time , $i , $from, $to, $subject >> /data/$2/register-$2.csv
# --- END REGISTER ---
# --- Place email into the inbox ---
chown $2 /data/$2/$i
chgrp $2 /data/$2/$i
cp -p /data/$2/$i /data/$2/Maildir/new/$i
aws s3 mv /data/$2/$i s3://$1/.inbox/$i.eml
# --- END LOOP ALL FILES ---
else
aws s3 mv /data/$2/$i s3://$1/.dmarc/$i.eml
fi
# end if it was a dmarc report
done
exit

Here is the modified version, only for using .inbox/:

#!/bin/sh
# $1 e-mail bucket - e.g. fred.domain.au.inbox
# $2 user name - e.g. fred
# $3 user encryption password - always keep once in use - e.g. deputyD0g - avoid ! character unless \! escaped
# Subject Symbol - ¹ archive (archived emails are retrieved via email-archive.sh)
# --- BEGIN LOOP ALL FILES ---
for i in `aws s3 ls s3://$1 | grep -v .archive | grep -v .dmarc | grep -v .sent | grep -v .BAK | grep -v .restore | grep -v .inbox | awk '{print $NF}'`
do
        # --- BEGIN TIMESTAMP ---
        # get timestamp for each file
        dat=""
        size=""
        time=""
        dat=`aws s3 ls s3://$1/$i|grep -v .archive | grep -v .dmarc | grep -v .sent | grep -v .BAK | grep -v .restore| grep -v .inbox |  awk '{print $1, $2, $3}'`
        time=`echo $dat | awk '{print $1, $2}'`
        size=`echo $dat | awk '{print $3}'`
        # --- END TIMESTAMP ---
# --- BEGIN PRELIMINARY ---
# Copy original file to /data/user (e.g. fred) so we can work with it
# Then encrypt/copy original file to .archive without alteration, except adding superscript "¹" to indicate it is archived
# When any email is processed it is checked to see it is not already from an archive so that the "¹" superscript is not added again
# This means we choose not to put an archive character into forwarded emails that were previously archived.
k=$i".enc"
o=$i".o"
aws s3 cp s3://$1/$i /data/$2/$i
aws s3 rm s3://$1/$i
# only process normally if not a dmarc report
m=""
m=`grep -i "Report Domain" /data/$2/$i`
if [ "$m" = "" ] ;
then
# --- END PRELIMINARY --- WHERE $i is what we work with after our archive section is finished

# --- BEGIN ARCHIVE SECTION --- 
# cp -p /data/$2/$i /data/$2/$o
# p=""
# p=`grep 'Subject:' /data/$2/$o | grep -v "DKIM" | grep -v "Subject:Message-ID" | grep -v "Subject:Date:" |  grep -E -i '¹'`
# if [ "$p" = "" ] ;
# then
 # add the "¹" superscript as it is not present
 # sed '0,/Subject: /{s//Subject: ¹ /}' /data/$2/$o | /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -out /data/$2/$k
 # aws s3 cp /data/$2/$k s3://$1/.archive/$k
 # bucket does not permit us to change the file ownership in .archive
 # rm /data/$2/$o /data/$2/$k
# else
 # if superscript was previously added, just archive the file (even if from a newly forwarded email)
 # /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -in /data/$2/$o -out /data/$2/$k
 # chown $2 /data/$2/$k
 # chgrp $2 /data/$2/$k
 # aws s3 cp /data/$2/$k s3://$1/.archive/$k
 # rm /data/$2/$o /data/$2/$k
# fi
# --- END ARCHIVE SECTION --- 


# We do not test for tracking or base64 tracking content.
# --- BEGIN REGISTER --- append register.csv file - collation of all metadata
# We append the aws filename and date, all From: and Subject: strings
from=""
from=`grep "From: " /data/$2/$i|awk '{print $NF}'|sed 's/^M//g'`
subject=""
subject=`grep "Subject: " /data/$2/$i|sed 's/,/-/g'|awk -F: '{print $NF}'|sed 's/^M//g'`
to=""
to=`grep "To:[ ]" /data/$2/$i|sed 's/,/-/g'|awk -FTo: '{print $NF}'|sed 's/^M//g'`
echo $time , $i , $from, $to, $subject >> /data/$2/register-$2.csv
# --- END REGISTER ---
# --- Place email into the inbox ---
chown $2 /data/$2/$i
chgrp $2 /data/$2/$i
# I will comment out putting emails into dovecot inbox and leave all emails to .inbox
# cp -p /data/$2/$i /data/$2/Maildir/new/$i
aws s3 mv /data/$2/$i s3://$1/.inbox/$i.eml
# --- END LOOP ALL FILES ---
else
aws s3 mv /data/$2/$i s3://$1/.dmarc/$i.eml
fi
# end if it was a dmarc report
done
exit

Here is email-archive.sh, which simply places emails into .restore/; these can be downloaded to your PC with the MSP360 Explorer app.

#!/bin/sh
echo " "
echo "-------------------------------------------------------------------------------------------------------------"
echo " "
echo "When entering your search string, . represents ALL files in the bucket, which can be a lot"
echo "A string of characters followed by . represents a search on those starting characters"
echo "For example, k. returns all files starting with k while k0. returns all files starting with k0"
echo "You may search groups, such as ci5. m2. and so on"
echo "Please do not use other wildcard characters for the search. Your S3 Bucket lists the .archive files you need"
echo "The user archive password is not necessarily the user password, but is the encryption password always used"
echo "If your encryption password has the ! character, escape it with the \ character - e.g. g00!fy needs g00\!fy"
echo "This program will first attempt to list the files you search for, and will then ask to proceed"
echo " "
echo "-------------------------------------------------------------------------------------------------------------"
echo ""
read -p "[Enter Bucket Name           ]: " bucket
read -p "[Enter User Name             ]: " user
read -p "[Enter User Archive Password ]: " password
read -p "[Enter Search String         ]: " search
 # for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$search`
  IFS=$'\n'
  for f in `echo $search|awk '{printf $1;printf "\n" $2;  printf "\n" $3; printf "\n" $4; printf "\n" $5; printf "\n" $6; printf "\n" $7;}'`
  do
  for i in `aws s3 ls s3://$bucket/.archive/ | grep ".enc"|sort -n|awk '{print $4,$1,$2,$3}'|grep ^$f`
   do
   echo $i
   done
  done
 answer=""
 read -p "[Proceed? (y for yes)       ]: " answer
 if [ "$answer" = "y" ] ;
 then
 # for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$search`
 for f in `echo $search|awk '{printf $1;printf "\n" $2;  printf "\n" $3; printf "\n" $4; printf "\n" $5; printf "\n" $6; printf "\n" $7;}'`
  do
  for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$f`
  do
 echo $i
 aws s3 cp s3://$bucket/.archive/$i /data/$user
 # modified for MS Exchange email copies only to the bucket
 /usr/bin/openssl aes-256-cbc -d -a -salt -pbkdf2 -k $password -in /data/$user/$i > /data/$user/Maildir/$i.eml
 # original Dovecot version restored into the Maildir instead:
 # /usr/bin/openssl aes-256-cbc -d -a -salt -pbkdf2 -k $password -in /data/$user/$i > /data/$user/Maildir/new/$i.eml
 # modified for MS Exchange:
 aws s3 mv /data/$user/Maildir/$i.eml s3://$bucket/.restore/$i.eml
 rm /data/$user/$i
 done
 done
fi
exit

MS Exchange - Copy emails to S3 Bucket

MS Exchange has been set up according to the rules below, as it should be, but it is not processing emails correctly at this stage. Please do not use the forwarding details below. However, the quarantine settings etc. are still worth looking at. I’ll continue to research. Because it was working for a while and then stopped, I may come back to this and try again down the track. I suspect that the Amazon “otherdomain.au” configuration has to have an actual IMAP service (i.e. Dovecot) running for Microsoft to accept forwarding of emails. I’ll look at that.

 

If you wish to have copies of MS Exchange emails to your domain name copied to an S3 Bucket, it is possible if you have a website and any domain on Amazon EC2 that can access the buckets in your AWS account.

For instance, let’s say you created Route53 for otherdomain.au, and your primary website is mydomain.au with emails going to @mydomain.au via Route53 to MS Exchange. Say otherdomain.au has no website, so it has no SSL certificate. otherdomain.au needs a full set of Route53 DNS entries that include SES, as per normal practice as described in my articles.

Then, mydomain.au needs to include the Amazon MX record as priority 10, so MS Exchange keeps priority 0 on that record.

MS Exchange needs 24 to 48 hours when you first attempt to change Organization Settings, so you do some steps that you come back to the next day until those configurations work. I’ll go into this in a moment. Basically this will forward mydomain.au emails to otherdomain.au as copies.

You then configure MS Exchange to forward a copy of all emails to you@otherdomain.au. Essentially, you use SES Email Rules to simply place emails into an S3 Bucket in your local region. For instance, otherdomain.au.inbox could be a good bucket name to use. You need no Lambda rules involved.

Then your Linux instance for mydomain.au can run shell scripts every 15 minutes or so to manage the emails from the bucket, because your AWS account lets you access buckets regardless of your domain name.
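For example, such a crontab entry could look like this (bucket and user names follow the examples above):

*/15 * * * * /home/ec2-user/email-in.sh otherdomain.au.inbox fred deputyD0g >/dev/null 2>&1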

In my case, I modified the email-in.sh and email-archive.sh scripts as follows.

The S3 bucket now contains these subdirectories:

.archive
.BAK
.dmarc
.inbox
.restore
.sent

I have the same setups for Dovecot directories, even though Dovecot is not installed.
So, /data/user/Maildir, /home/user, /home/user/Maildir as a soft link to /data/user/Maildir. That is, we have the same structure as in my Dovecot articles.

When an email is copied by MS Exchange to the otherdomain.au address, SES puts it in the nominated bucket. The shell script email-in.sh writes the encrypted (base64-armoured) version to .archive; if it were a DMARC email it would go to .dmarc; and the email goes as-is into .inbox, in my case for up to 90 days. I can use MSP360 Explorer to retrieve emails immediately from .inbox if need be. The .archive files are an encrypted version with 365-day expiry.

When we run email-archive.sh we, as usual, specify the bucket, the user name we set up, and the encryption password; retrieved files then go into .restore, in my case with a 14-day expiry.

The shell scripts are:

email-in.sh (encrypted copy to .archive, and original to .inbox for 90 days or whatever lifecycle you set)

vi /home/ec2-user/email-in.sh

#!/bin/sh
# $1 e-mail bucket - e.g. fred.domain.au.inbox
# $2 user name - e.g. fred
# $3 user encryption password - always keep once in use - e.g. deputyD0g - avoid ! character unless \! escaped
# Subject Symbol - ¹ archive (archived emails are retrieved via email-archive.sh)
# --- BEGIN LOOP ALL FILES ---
for i in `aws s3 ls s3://$1 | grep -v .archive | grep -v .dmarc | grep -v .sent | grep -v .BAK | grep -v .restore | grep -v .inbox | awk '{print $NF}'`
do
        # --- BEGIN TIMESTAMP ---
        # get timestamp for each file
        dat=""
        size=""
        time=""
        dat=`aws s3 ls s3://$1/$i|grep -v .archive | grep -v .dmarc | grep -v .sent | grep -v .BAK | grep -v .restore| grep -v .inbox |  awk '{print $1, $2, $3}'`
        time=`echo $dat | awk '{print $1, $2}'`
        size=`echo $dat | awk '{print $3}'`
        # --- END TIMESTAMP ---
# --- BEGIN PRELIMINARY ---
# Copy original file to /data/user (e.g. fred) so we can work with it
# Then encrypt/copy original file to .archive without alteration, except adding superscript "¹" to indicate it is archived
# When any email is processed it is checked to see it is not already from an archive so that the "¹" superscript is not added again
# This means we choose not to put an archive character into forwarded emails that were previously archived.
k=$i".enc"
o=$i".o"
aws s3 cp s3://$1/$i /data/$2/$i
aws s3 rm s3://$1/$i
# only process normally if not a dmarc report
m=""
m=`grep -i "Report Domain" /data/$2/$i`
if [ "$m" = "" ] ;
then
# --- END PRELIMINARY --- WHERE $i is what we work with after our archive section is finished
# --- BEGIN ARCHIVE SECTION --- 
cp -p /data/$2/$i /data/$2/$o
p=""
p=`grep 'Subject:' /data/$2/$o | grep -v "DKIM" | grep -v "Subject:Message-ID" | grep -v "Subject:Date:" |  grep -E -i '¹'`
if [ "$p" = "" ] ;
then
 # add the "¹" superscript as it is not present
 sed '0,/Subject: /{s//Subject: ¹ /}' /data/$2/$o | /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -out /data/$2/$k
 aws s3 cp /data/$2/$k s3://$1/.archive/$k
 # bucket does not permit us to change the file ownership in .archive
 rm /data/$2/$o /data/$2/$k
else
 # if superscript was previously added, just archive the file (even if from a newly forwarded email)
 /usr/bin/openssl aes-256-cbc -a -salt -pbkdf2 -k $3 -in /data/$2/$o -out /data/$2/$k
 chown $2 /data/$2/$k
 chgrp $2 /data/$2/$k
 aws s3 cp /data/$2/$k s3://$1/.archive/$k
 rm /data/$2/$o /data/$2/$k
fi
# --- END ARCHIVE SECTION --- 

# We do not test for tracking or base64 tracking content.
# --- BEGIN REGISTER --- append register.csv file - collation of all metadata
# We append the aws filename and date, all From: and Subject: strings
from=""
from=`grep "From: " /data/$2/$i|awk '{print $NF}'|sed 's/^M//g'`
subject=""
subject=`grep "Subject: " /data/$2/$i|sed 's/,/-/g'|awk -F: '{print $NF}'|sed 's/^M//g'`
to=""
to=`grep "To:[ ]" /data/$2/$i|sed 's/,/-/g'|awk -FTo: '{print $NF}'|sed 's/^M//g'`
echo $time , $i , $from, $to, $subject >> /data/$2/register-$2.csv
# --- END REGISTER ---
# --- Place email into the inbox ---
chown $2 /data/$2/$i
chgrp $2 /data/$2/$i
cp -p /data/$2/$i /data/$2/Maildir/new/$i
# modified for MS Exchange: 90-day copies to .inbox, then use MSP360 Explorer to download manually
aws s3 mv /data/$2/$i s3://$1/.inbox/$i.eml
# --- END LOOP ALL FILES ---
else
aws s3 mv /data/$2/$i s3://$1/.dmarc/$i.eml
fi
# end if it was a dmarc report
done
exit

[save and exit - but use the Unix method to replace the three ^M characters with “control-v control-m” to get the special character.]

Notice the grep patterns must exclude the hidden folders in the bucket.
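As an aside, that chain of grep -v commands can be collapsed into a single extended pattern; this is effectively equivalent for these six folder names:

aws s3 ls s3://$1 | grep -Ev '\.(archive|dmarc|sent|BAK|restore|inbox)' | awk '{print $NF}'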

And, there is only one simple modification using .inbox:

cp -p /data/$2/$i /data/$2/Maildir/new/$i
# modified for MS Exchange: 90-day copies to .inbox, then use MSP360 Explorer to download manually
aws s3 mv /data/$2/$i s3://$1/.inbox/$i.eml

The email-archive.sh script is slightly modified to put the decrypted emails into .restore:

#!/bin/sh
echo " "
echo "-------------------------------------------------------------------------------------------------------------"
echo " "
echo "When entering your search string, . represents ALL files in the bucket, which can be a lot"
echo "A string of characters followed by . represents a search on those starting characters"
echo "For example, k. returns all files starting with k while k0. returns all files starting with k0"
echo "You may search groups, such as ci5. m2. and so on"
echo "Please do not use other wildcard characters for the search. Your S3 Bucket lists the .archive files you need"
echo "The user archive password is not necessarily the user password, but is the encryption password always used"
echo "If your encryption password has the ! character, escape it with the \ character - e.g. g00!fy needs g00\!fy"
echo "This program will first attempt to list the files you search for, and will then ask to proceed"
echo " "
echo "-------------------------------------------------------------------------------------------------------------"
echo ""
read -p "[Enter Bucket Name           ]: " bucket
read -p "[Enter User Name             ]: " user
read -p "[Enter User Archive Password ]: " password
read -p "[Enter Search String         ]: " search
 # for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$search`
  IFS=$'\n'
  for f in `echo $search|awk '{printf $1;printf "\n" $2;  printf "\n" $3; printf "\n" $4; printf "\n" $5; printf "\n" $6; printf "\n" $7;}'`
  do
  for i in `aws s3 ls s3://$bucket/.archive/ | grep ".enc"|sort -n|awk '{print $4,$1,$2,$3}'|grep ^$f`
   do
   echo $i
   done
  done
 answer=""
 read -p "[Proceed? (y for yes)       ]: " answer
 if [ "$answer" = "y" ] ;
 then
 # for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$search`
 for f in `echo $search|awk '{printf $1;printf "\n" $2;  printf "\n" $3; printf "\n" $4; printf "\n" $5; printf "\n" $6; printf "\n" $7;}'`
  do
  for i in `aws s3 ls s3://$bucket/.archive/ | awk '{print $NF}'| grep ".enc" | grep ^$f`
  do
 echo $i
 aws s3 cp s3://$bucket/.archive/$i /data/$user
 # modified for MS Exchange email copies only to the bucket
 /usr/bin/openssl aes-256-cbc -d -a -salt -pbkdf2 -k $password -in /data/$user/$i > /data/$user/Maildir/$i.eml
 # /usr/bin/openssl aes-256-cbc -d -a -salt -pbkdf2 -k $password -in /data/$user/$i > /data/$user/Maildir/new/$i.eml
 # modified for MS Exchange:
 aws s3 mv /data/$user/Maildir/$i.eml s3://$bucket/.restore/$i.eml
 rm /data/$user/$i
 done
 done
fi
exit

[save and exit]

MS Exchange setups can be a little confusing, but they are not hard.

First you register otherdomain.au as a remote domain.

Then you set up the forwarding policy. Perhaps review:
learn.microsoft.com/en-us/microsoft-365/security/office-365-security/anti-malware-protection-about?view=o365-worldwide

jasoncoltrin.com/2021/12/20/how-to-fix-550-5-7-520-access-denied-your-organization-does-not-allow-external-forwarding/ that shows how to set the new rule to allow forwarding.
Also see:
learn.microsoft.com/en-us/exchange/recipients/user-mailboxes/email-forwarding?view=exchserver-2019
learn.microsoft.com/en-us/exchange/recipients-in-exchange-online/manage-user-mailboxes/configure-email-forwarding

Basically you go to Defender > Policies and Rules > Anti-Spam policies and create a new rule that uses default settings, with the forwarding option set to “On – Forwarding is enabled”. You can call the rule anything, such as “Custom Outbound Mail Forwarding”. This is what lets you forward emails to the remote domain; then, under the user, you add the forwarding address.

For remote domains help:
learn.microsoft.com/en-us/exchange/mail-flow-best-practices/remote-domains/manage-remote-domains

Then you go to your recipient user and, under Home > MAIL > Mailboxes > Email Forwarding, add a forwarding address with the copy to the external site, and keep the original with MS Exchange so you get both emails.

You can also configure Defender to remove emails that have commonly used bad attachments.

I know I am not giving a lot of detail, but searching MS Exchange on these topics will show what to do.

You can also review your security score in Azure as another job to do, and respond to the recommendations by marking that you use 3rd parties (implying Google authentication) or “planned”, which improves your score.

 

Here is the bad attachment screen:

[screenshot of the bad attachments settings]

The service is now complete. It is best to do testing over a period of time before going live.
