#RSFtalks with Edward Snowden
What an intelligent, thoughtful individual.
I find it difficult to forgive 44 for failing to pardon this patriot and instead pursuing him until the end of his term.
If not you, then who?
Great discussion of journalism, its place in society, and of the idea that anyone who witnesses wrongdoing has a moral obligation to humanity to communicate it loudly and in full voice.
Randomosity
Monday, May 11, 2020
Tuesday, January 20, 2015
Mouse feet and Teflon glides
Searching for mouse glide feet
[ptfe mouse tape]
CS Hyde Mouse Discs with One Side Clean Removal Adhesive, 0.002" Thick, 1/4" Diameter (150 pcs/roll)
http://www.hyperglide.net/?hg=mx_skates_1
caveat: 20 days shipping to US
US retailer for hyperglide:
http://www.frozencpu.com/products/3048/pad-100/Hyperglide_Mouse_Skates_MX-2_-_8_Skates_-_Logitech_MX_300_G1.html
http://www.slicksurf.com/slicksurf/home/
forum post with tape experiences:
http://forums.anandtech.com/showthread.php?t=234427
Saturday, April 26, 2014
CLANG error
https://langui.sh/2014/03/10/wunused-command-line-argument-hard-error-in-future-is-a-harsh-mistress/
For now you can work around the issue with:
export ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future"
Monday, February 24, 2014
there's nothing like a static location for SSH_AUTH_SOCK
Place this in your .bash_profile, and your GNU Screen sessions will resume working when a new ssh-agent comes along (such as after logging out and back in).
#!/bin/bash
# If you're on a multi-user system you might consider choosing
# a more secure (permissions-wise) directory than just $HOME/.
# Though it is just a symlink, and the destination of the symlink should already be secure.
# If it's not already our canonical path, and the agent socket exists... make it canonical.
# (Note: agent sockets are sockets, not regular files, so test with -S rather than -f.)
[[ "${SSH_AUTH_SOCK}" != "$HOME/.ssh-agent-$USER-screen" ]] && \
[[ -S "${SSH_AUTH_SOCK}" ]] && \
ln -sf "${SSH_AUTH_SOCK}" "$HOME/.ssh-agent-$USER-screen"
# Other end of the link:
export SSH_AUTH_SOCK_ORIG=${SSH_AUTH_SOCK}
export SSH_AUTH_SOCK="$HOME/.ssh-agent-$USER-screen"
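A runnable sketch of the relink behavior, using a plain temp file (and -e) in place of a live agent socket, so it can be tried without ssh-agent running:

```shell
#!/bin/bash
# Simulate a fresh login: a new "agent socket" appears at a random
# path, and the snippet relinks the canonical name to point at it.
demo_dir=$(mktemp -d)
canon="${demo_dir}/ssh-agent-demo"        # stands in for $HOME/.ssh-agent-$USER-screen
SSH_AUTH_SOCK="${demo_dir}/agent.12345"   # stands in for the per-login socket
touch "${SSH_AUTH_SOCK}"

# Same shape as the .bash_profile snippet (with -e instead of -S,
# since this demo uses a regular file rather than a real socket):
[[ "${SSH_AUTH_SOCK}" != "${canon}" ]] && \
[[ -e "${SSH_AUTH_SOCK}" ]] && \
ln -sf "${SSH_AUTH_SOCK}" "${canon}"
export SSH_AUTH_SOCK="${canon}"

readlink "${canon}"   # prints the path of the newest "agent socket"
```

Long-lived screen windows only ever see the canonical path, so they keep working across relinks.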
Sunday, December 15, 2013
gphoto2 device (jl2005c) supported by brew, but not MacPorts.
Even though both carry the latest 2.5.2 release, Jeilin JL2005C gphoto2 support via the MacPorts build was broken; the brew build wound up working.
conversion of the resulting .ppm files then required:
brew install netpbm
download and convert all command:
gphoto2 -P ; for i in *.ppm ; do pnmtojpeg ${i} > ${i/%ppm/jpg} ; done
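The ${i/%ppm/jpg} in that one-liner is bash's replace-at-end parameter expansion; a quick standalone check of just the rename logic (the filename here is a made-up example):

```shell
# bash's ${var/%pattern/replacement} replaces a match anchored at the
# end of the value -- here, the trailing "ppm" becomes "jpg":
i="capt0001.ppm"
echo "${i/%ppm/jpg}"   # → capt0001.jpg
```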
Tuesday, September 17, 2013
Cassandra cleanup script -- with percentage goal awareness.
#!/bin/bash
# Purpose:
# This script is run daily from cron before the compaction script.
# On the same days the compaction script triggers, this script should also.
# On triggered days, this script should trim old files to
# bring disk usage below 60%.
# Options accepted:
#
# $1 = directory to cleanup, defaults to "/mnt/cassandra/data/myschema"
DATADIR=${1:-/mnt/cassandra/data/myschema}
## ____ Config Variables ____
# Attempt to delete filesets one by one until disk Use% is less than:
goal_percentage_less_than=60
## ____ No user serviceable parts below this line ____
if [[ ! -d ${DATADIR} ]] ; then
echo "Argument: ${DATADIR} directory does not exist. Aborting"
exit 1
fi
# Day of week (0=Sunday)
declare -r -i todays_dow=$(date +%w)
readonly -a days_of_week=( Sunday Monday Tuesday Wednesday Thursday Friday Saturday )
usage () {
echo "$0 <no options allowed yet>";
}
# we are not the compaction script, but if we were:
#
# if [[ ! -e /var/log/cassandra ]] ; then
# mkdir /var/log/cassandra
# elif [[ ! -d /var/log/cassandra ]] ; then
# echo "/var/log/cassandra is not a directory."
# exit 1
# fi
# learn and parse ring
declare -a ring=( $(/usr/bin/cassandra-nodetool -h 127.0.0.1 -p 12352 ring \
| grep '^[0-9]' | cut -f1 -d' ') )
declare -i ringsize=${#ring[@]}
# sanity check
if [[ ringsize -lt 1 ]] ; then
echo "count of ring nodes was less than 1. aborting."
exit 2
fi
declare -r this_hosts_ip=$(host $(hostname) | grep 'has address' | awk '{print $NF}')
echo "My hostname,IP is: $(hostname),${this_hosts_ip}"
if [[ -z ${this_hosts_ip} ]] ; then
echo "unable to convert local hostname into an IP address. aborting."
exit 3
fi
declare -r this_hosts_ip_regex=$(echo ${this_hosts_ip} | sed 's/\./\\./g')
# Sanity check, am I a member of this ring?
if [[ ! ${ring[@]} =~ ${this_hosts_ip_regex} ]] ; then
echo "Couldn't find myself (${this_hosts_ip} in ring: ${ring[@]}. aborting"
exit 4
fi
# In a list of zero-indexed nodes, which one am I?
my_index=unset
for i in ${!ring[@]} ; do
[[ ${ring[i]} =~ ${this_hosts_ip_regex} ]] && {
my_index=$i
break
}
done
# Sanity check, enforce that we found our index:
[[ ${my_index} == "unset" ]] && exit 5
my_day_of_week=$(( ${my_index} % 7 ))
# Check for the case where I am the last node, but my
# day of week is Sunday (same as first node's).
# Choose (3=Wednesday) to avoid adjacent (0=Sunday).
# old way: let "! (( ringsize - 1 ) % 7)" && my_day_of_week=3
if [[ ${my_index} -eq $(( ${ringsize} - 1 )) && ${my_day_of_week} -eq 0 ]] ; then
my_day_of_week=3
fi
echo "I will compact on ${days_of_week[$my_day_of_week]}s."
echo "Today is ${days_of_week[$todays_dow]}."
# DO NOT SUBMIT -- during testing, comment out this gate to force a run:
if [[ ${my_day_of_week} -ne ${todays_dow} ]] ; then
echo "Not our lucky day."
exit 0
fi
echo "It's our lucky day. BEGIN ***cleaning up older files***"
# Clean up oldest filesets until disk is less than 60% full.
cd ${DATADIR}
declare -a -r DFSTATS=( $(df -kP ${DATADIR} | tail -1) )
echo df reported capacity=${DFSTATS[4]/[%]/}%
echo Calculated Capacity=$(( ( 100 * ${DFSTATS[2]} ) / ${DFSTATS[1]} ))%
declare -a filesizes=() filenames=()
function load_filedata {
local size name
while read size name ; do
filesizes+=(${size})
filenames+=(${name})
done < <(ls -1krst ${1})
}
function current_fileset_size_sum {
local -i total=0
for i in ${filesizes[@]} ; do
total+=$i
done
# Callers capture stdout; a return code would wrap at 256 for large sums.
echo "$total"
}
# Get all fileset numbers and put them in an array.
filesets_numbers_by_time=( $(ls -1kst | egrep -v ^total | awk -F- '{print $2}' | sort -n | uniq) )
declare -r filesets_count=${#filesets_numbers_by_time[@]}
# Add each fileset's filesizes into one value.
# Note: this creates a sparse array.
for i in ${filesets_numbers_by_time[@]} ; do
filesizes=()
filenames=()
load_filedata "*-${i}-*"
filesets_sizes[$i]=$(current_fileset_size_sum)
# echo "set filesets_sizes[$i] to $(current_fileset_size_sum)"
done
function load_oldest_fileset_data {
load_filedata "*-${filesets_numbers_by_time[0]}-*"
return ${filesets_numbers_by_time[0]}
}
declare -i count_of_filesets_to_delete=0
declare -i expected_capacity=100 # Sane default
declare -i accumulated_deletes_in_kbytes=0
declare -a filesets_to_delete=()
# External variables modified by this function
# $expected_capacity
function current_expected_capacity {
local -i total_of_filesets=0
for i in ${filesets_to_delete[@]} ; do
total_of_filesets+=${filesets_sizes[$i]}
echo "fileset #$i size is ${filesets_sizes[$i]}"
done
echo -n Calculated %-Capacity after removing filesets \"${filesets_to_delete[@]}\"= >/dev/stderr
expected_capacity=$(( ( 100 * ( ${DFSTATS[2]} - ${total_of_filesets} ) ) / ${DFSTATS[1]} ))
echo ${expected_capacity}% >/dev/stderr
return ${expected_capacity}
}
# External variables modified by this function
# $filesets_to_delete
# $filesets_numbers_by_time
# $count_of_filesets_to_delete
function add_oldest_fileset {
filesets_to_delete+=( ${filesets_numbers_by_time[0]} )
echo filesets_to_delete=${filesets_to_delete[@]}
# drop the oldest fileset from this list:
filesets_numbers_by_time=( ${filesets_numbers_by_time[@]:1} )
count_of_filesets_to_delete+=1
}
# sets initial value of $expected_capacity
current_expected_capacity
while [[ expected_capacity -gt ${goal_percentage_less_than} ]] ; do
add_oldest_fileset
current_expected_capacity
[[ count_of_filesets_to_delete -gt 3 ]] && {
echo "Planner thinks we need to delete more than 3 sets,"
echo "We might have a problem here... Aborting."
exit 6
}
done
# Check that we are not deleting more than half the filesets:
if [[ ${#filesets_to_delete[@]} -gt $(( filesets_count / 2 )) ]] ; then
echo -n "Plan is to delete too many filesets: "
echo "${#filesets_to_delete[@]} of ${filesets_count}"
echo Aborting
exit 7
fi
# do the deletes
echo "If I were a real cleanup script, I would now delete:"
for i in ${filesets_to_delete[@]} ; do
ls -l *-"${i}"-*
done
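The node-to-weekday mapping above (ring index mod 7, with the last node bumped to Wednesday if it would collide with node 0 on Sunday) can be exercised in isolation; the ring sizes and indices below are arbitrary test values:

```shell
# Which weekday (0=Sunday) does a node compact on, given its zero-based
# ring index? The last node is moved to Wednesday (3) if it would
# otherwise land on Sunday (0), adjacent to node 0's day.
compaction_day () {
  local -i my_index=$1 ringsize=$2
  local -i dow=$(( my_index % 7 ))
  if (( my_index == ringsize - 1 && dow == 0 )) ; then
    dow=3
  fi
  echo "${dow}"
}

compaction_day 2 8    # → 2 (Tuesday)
compaction_day 7 8    # → 3 (last node, 7 % 7 = 0, bumped to Wednesday)
```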
Friday, August 16, 2013
Creating a static readonly chroot environment using squashfs.
Goal was to create a hermetic readonly chroot environment, with no external dependencies.
The contents of the environment include:
files associated with a (tested) checkout/revision of the files used by a nameserver (BIND)
the named binary + required libraries
any required support files or devices (/dev/null)
External writeable directories may exist to allow logs and pid file:
/var/run/*.pid
/var/named/chroot/logs/
Package format is a squashfs filesystem.
Capture a successful start and end of named using strace and system call arguments:
(ldd /usr/sbin/named would also have worked here)
mkdir /tmp/tmpdir
cd /tmp/tmpdir
strace -f -e trace=open,stat,chroot /bin/bash -c 'ulimit -S -c 0 >/dev/null 2>&1 ; /usr/sbin/named -u named -4 -c /etc/named.conf -t /var/named/chroot' 2>&1 | grep -v -e ENOENT -e EACCES > startup.syscalls &
sleep 2
killall named
Gather a list of filenames (excluding /proc /dev) before the chroot() syscall.
(This could also be derived from 'ldd /usr/sbin/named' output...)
awk -F\" '/chroot\(/ { nextfile; } {print $2}' startup.syscalls | sort | uniq | grep -v -e /proc -e /dev -e /tmp/tmpdir
# Explanation of above command:
# syscalls before chroot() are at the system level, we only need these right now.
Output should be something like:
.
/etc/group
/etc/ld.so.cache
/etc/localtime
/etc/nsswitch.conf
/etc/passwd
/lib64/libattr.so.1
/lib64/libcap.so.2
/lib64/libcom_err.so.2
/lib64/libc.so.6
/lib64/libdl.so.2
/lib64/libgssapi_krb5.so.2
/lib64/libk5crypto.so.3
/lib64/libkeyutils.so.1
/lib64/libkrb5.so.3
/lib64/libkrb5support.so.0
/lib64/libm.so.6
/lib64/libnss_files.so.2
/lib64/libpthread.so.0
/lib64/libresolv.so.2
/lib64/libselinux.so.1
/lib64/libtinfo.so.5
/lib64/libz.so.1
/usr/lib64/gconv/gconv-modules.cache
/usr/lib64/libbind9.so.90
/usr/lib64/libcrypto.so.10
/usr/lib64/libdns.so.99
/usr/lib64/libisccc.so.90
/usr/lib64/libisccfg.so.90
/usr/lib64/libisc.so.95
/usr/lib64/liblwres.so.90
/usr/lib64/libxml2.so.2
/usr/lib64/tls
/usr/lib/locale/locale-archive
# Drop the blank and '.' entries at the start, and copy non-chroot items to an empty directory:
awk -F\" '/chroot\(/ { nextfile; } {print $2}' startup.syscalls | sort | uniq | grep -v -e /proc -e /dev -e /tmp/tmpdir | sed -e 1d -e 2d | cpio --make-directories --dereference -p /tmp/tmpdir/
194017 blocks
# Explanation of above:
# Because named is smart enough to open required system devices (/dev/null,/dev/log,/dev/random)
# before chroot()ing, /dev and /proc should NOT need to be mounted (or copied) into the chroot.
# Copy the chroot items:
( cd /var/named/chroot ; find . | cpio --make-directories -p /tmp/tmpdir/var/named/chroot )
( cd /etc/pki ; find . | cpio --make-directories -p /tmp/tmpdir/var/named/chroot/etc/pki )
find /tmp/tmpdir/var/named/chroot/ -type d -exec chmod o+rx '{}' \;
rm /tmp/tmpdir/var/named/chroot/var/named/logs/named.run
chmod 777 /tmp/tmpdir/var/named/chroot/var/named/logs
chmod o+r /tmp/tmpdir/var/named/chroot/etc/rndc.{conf,key}
# TODO: consider use of --force-gid named, --force-uid named
mkdir -p /tmp/tmpdir/usr/sbin
cp /usr/sbin/named /tmp/tmpdir/usr/sbin
for i in /dev/null /dev/random /lib64/ld-linux-x86-64.so.2 ; do
echo $i
done | cpio --dereference -p --make-directories /tmp/tmpdir/
rm startup.syscalls
# and build the filesystem
mksquashfs tmpdir named.squashfs -noappend -all-root
mount -o loop named.squashfs /mnt
mount --bind /tmp/named.logs /mnt/var/named/chroot/var/named/logs
chroot /mnt "/usr/sbin/named" -u named -4 -c /etc/named.conf -t /var/named/chroot
----
squashfs stacking
cd /tmp/bind-9.9.3-chroot
mksquashfs * /tmp/bind993.squashfs -all-root
cp /tmp/bind993.squashfs /tmp/stage2.squashfs
cd /var/named/
mksquashfs chroot /tmp/stage2.squashfs -keep-as-directory -all-root
----
Mount and run:
# mount -o loop /tmp/stage2.squashfs /mnt
# mount --bind /tmp/named.logs /mnt/chroot/var/named/logs
# chroot /mnt "/usr/sbin/named" -u named -4 -c /etc/named.conf -t /chroot
Resulting mashup:
# ls -l /mnt/
total 0
drwxr-xr-x 2 root root 41 Jul 26 03:00 dev
drwxr-xr-x 2 root root 99 Jul 26 03:17 etc
drwxr-xr-x 2 root root 410 Jul 26 02:51 lib64
drwxrwxrwx 5 root root 48 Jul 27 01:19 chroot
drwxr-xr-x 5 root root 51 Jul 26 02:36 usr
# ls -l /mnt/chroot
total 0
drwxr-x--- 2 root named 53 Jul 26 00:46 dev
drwxr-x--- 2 root named 201 Jul 26 03:34 etc
drwxr-x--- 3 root named 28 Jul 26 00:46 var
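As noted above, the pre-chroot() file list could also have been derived from the dynamic loader. A hedged sketch of that alternative (using /bin/sh as the target binary here so it runs anywhere; substitute /usr/sbin/named on the real box):

```shell
# List the shared libraries a dynamically linked binary needs, one
# absolute path per line, suitable for piping into cpio as above.
# /bin/sh stands in for /usr/sbin/named in this sketch.
ldd /bin/sh | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\//) print $i }' | sort -u
```

This misses files named opens at runtime (/etc/passwd, nsswitch, locale data), which is why the strace capture above is the more complete source.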