Find all the files a specific user owns, ignoring remote mounts. You need to sudo or be root to do this. -xdev keeps find on the local hard drive so it doesn't descend into remotely mounted drives.
find / -type f -user david -xdev > ~/david_files.txt
And, if you would like to see what files in this list are not in your home directory, try this:
cat ~/david_files.txt | grep -v '/home/david' # or you could use $HOME if you like in place of '/home/david' 8-)
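Here is a tiny self-contained sketch of the same pattern, using a throwaway directory instead of / (all paths below are made up for illustration):

```shell
# build a small tree to search (hypothetical paths, for demonstration only)
mkdir -p /tmp/findemo/home/david /tmp/findemo/var/log
echo hi > /tmp/findemo/home/david/notes.txt
echo hi > /tmp/findemo/var/log/app.log

# list all regular files, then filter out anything under the home directory,
# mirroring the grep -v trick above
find /tmp/findemo -xdev -type f | grep -v '/home/david'
```

Only the file outside of /home/david survives the filter.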
I am comparing a local drive against a checked-out repository, and these steps can validate that they match. The find command will recursively dig down to find all the relevant files.
# create md5deep of current repository code
rm ~/gui_src_md5deep.txt
find lvisf-gui/ -iname '*.cpp' -exec md5deep -l {} \; >> ~/gui_src_md5deep.txt
find lvisf-gui/ -iname '*.h' -exec md5deep -l {} \; >> ~/gui_src_md5deep.txt
# compare to what is on the system
cd /home/lvis/work/lvisf-gui
ln -s trunk lvisf-gui
find lvisf-gui/ -iname '*.cpp' -exec md5deep -lX ~/gui_src_md5deep.txt {} \;
find lvisf-gui/ -iname '*.h' -exec md5deep -lX ~/gui_src_md5deep.txt {} \;
If you need to move a large number of files and links from one drive (probably a raid) to another, this would be one way to go about it.
# define our data directory
export MYDATA='/space/data/'
# total size of partition
du -s $MYDATA
4823313734      data
# file count
find $MYDATA -type f -iname '*' -print | wc -l
3632
# directory count
find $MYDATA -type d -iname '*' -print | wc -l
305
# symbolic link count
find $MYDATA -type l -iname '*' -print | wc -l
3
# make a list of symbolic links you can diff later on (column 9 might be wrong for your OS, check)
find $MYDATA -type l -iname '*' -print | xargs ls -l | sort -k 9 > ~/mybackup_symbolic_link_details.txt
time md5deep -o f -rlz $MYDATA | sort -k 3 > ~/mybackup_md5deep_all_files.txt
time rsync -vaP $MYDATA /to/your/destination
export NEWCOPY='/mnt/tmp/newraid/data'
du -s $NEWCOPY
find $NEWCOPY -type f -iname '*' -print | wc -l
find $NEWCOPY -type d -iname '*' -print | wc -l
find $NEWCOPY -type l -iname '*' -print | wc -l
find $NEWCOPY -type l -iname '*' -print | xargs ls -l | sort -k 9 > ~/mynewdrive_symbolic_link_details.txt
# if this does not work... uh oh.
diff ~/mybackup_symbolic_link_details.txt ~/mynewdrive_symbolic_link_details.txt
time find $NEWCOPY -type f | xargs md5deep -X ~/mybackup_md5deep_all_files.txt
# piping to a file
time find $NEWCOPY -type f | xargs md5deep -X ~/mybackup_md5deep_all_files.txt > ~/mynewdrive_md5deep_mismatches.txt
# and the mismatch file size should be zero (-8
ls -l ~/mynewdrive_md5deep_mismatches.txt
Using md5sum or md5deep to create md5sum lists for each directory (as borrowed from here: http://www.linuxquestions.org/questions/linux-software-2/how-to-create-md5sum-for-a-directory-689242/)
find directory -type f -print0 | xargs -0 md5sum >> checksums.md5
md5sum -c /path/to/file.md5
OR
md5deep -rl directory > checksums.md5
Once you have an md5sum list, you can verify the files using md5deep (the input list doesn't have to be in any order; it just verifies the md5sum and file name):
# the -X shows you only files that do not pass, try -M if you want to see all the successes
md5deep -X /PATH/TO/CHECKSUMS.md5 /DIR/TO/CHECK/*
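If md5deep isn't handy, the plain md5sum create-and-verify loop from above can be rehearsed in a scratch directory like this (paths are throwaway examples; note the list file itself is excluded from find so it doesn't checksum itself mid-write):

```shell
# make a couple of files to checksum in a throwaway directory
mkdir -p /tmp/md5demo && cd /tmp/md5demo
echo alpha > a.txt
echo beta  > b.txt

# build the checksum list with find + md5sum, skipping the list itself
find . -type f ! -name checksums.md5 -print0 | xargs -0 md5sum > checksums.md5

# verify it; md5sum -c prints OK per file and exits nonzero on any mismatch
md5sum -c checksums.md5
```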
dd is your best friend here. This page explains most anything you would want to do:
dd if=/dev/sdX1 of=/some/path/to/output/to.img bs=4096 conv=notrunc,noerror
NOTE: Be VERY CAREFUL that you are writing to the drive you intend to write to. A simple mix-up will wipe the entire wrong drive, and it is SO easy to do. Remember: if is the INPUT FILE and of is the OUTPUT FILE.
I added the time command to see how long it takes to do this drive mirror.
time dd if=/dev/sdX of=/dev/sdY bs=4096 conv=notrunc,noerror
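To get a feel for the flags without risking a real disk, the same dd invocation works on ordinary files (the paths here are scratch files, not devices):

```shell
# make a 64 KB source file of random data (16 blocks of 4096 bytes)
dd if=/dev/urandom of=/tmp/dd_src.img bs=4096 count=16 2>/dev/null

# copy it with the same flags used for the drive mirror above
dd if=/tmp/dd_src.img of=/tmp/dd_copy.img bs=4096 conv=notrunc,noerror 2>/dev/null

# cmp is silent and exits 0 when the copy is byte-identical
cmp /tmp/dd_src.img /tmp/dd_copy.img && echo "copies match"
```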
If you have bad sectors and want to “pad” any bad data with zeros (where it cannot read data from the source disk), use the sync option. (Is this block size OK? Old disks were 512; I believe 4096 is correct currently (2012).)
time dd if=/dev/sdX of=/dev/sdY bs=4096 conv=notrunc,noerror,sync
Updated today using lvis classic OS drive as an example — David Lloyd Rabine 2019/02/08 14:39
# compress the entire disk image into a compressed file:
time sudo bash -c "dd if=/dev/sdc bs=512 conv=notrunc,noerror,sync | gzip --best > /tmp/lvisclassic_os_dd_bs512.img.gz"
time sudo bash -c "dd if=/dev/sdd bs=4096 conv=notrunc,noerror,sync | gzip > /media/ubuntu/LVIS-EXT3-2/lvisclassic_os_dd_bs4096.img.gz"
# 32 GB 2021.07.13
time dd if=/dev/sdd conv=notrunc,noerror | pv -s 32017047552 | pigz --fast > /media/ubuntu/LVIS-EXT3-2/lvisclassic_os_dd_pigz_nobs.img.gz
# md5deep check the value of the raw data
time gunzip lvisclassic_os_dd_pigz_nobs.img.gz -c | pv -s 32017047552 | md5deep -z
gunzip lvisclassic_os_dd_bs512.img.gz -c | md5deep -z
mv lvisclassic_os_dd_bs512.img.gz lvisclassic_os_dd_bs512.img.20190208.a7c58c89e09befd7b6b871606cfb4821.gz
32017047552  a7c58c89e09befd7b6b871606cfb4821
# output back to a device
gunzip lvisclassic_os_dd_bs512.img.20190208.a7c58c89e09befd7b6b871606cfb4821.gz -c | dd of=/tmp/diskimage.img bs=512 conv=notrunc,noerror,sync status=progress
Add a progress bar and use sudo
time sudo dd if=/dev/sdX of=/dev/sdY bs=4096 conv=notrunc,noerror status=progress
Latest attempt at mirroring Sylvia's laptop (2019.12.27); she got a new 1TB SSD.
time sudo dd if=/dev/sdX of=/dev/sdY bs=64k conv=notrunc,noerror status=progress

kubuntu@kubuntu:~$ time sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=notrunc,noerror status=progress
1000203550720 bytes (1.0 TB, 932 GiB) copied, 11060 s, 90.4 MB/s
15261915+1 records in
15261915+1 records out
1000204886016 bytes (1.0 TB, 932 GiB) copied, 11062.5 s, 90.4 MB/s

real    184m22.400s
user    0m29.380s
sys     33m49.240s
A machine at work had no CD and no floppy drive, but I needed a BIOS update. I used a Linux machine to write a FreeDOS image to my 64MB USB stick; that let the machine boot and left enough room for the flash image and the updating software. Worked great:
##### WARNING: this formats the drive completely, everything on that drive WILL BE LOST #####
sudo dd if=FDSTD.288 of=/dev/sdxxx bs=512   # replace sdxxx with your machine's USB
##### (BE VERY CAREFUL which disk you are wiping out) #####
sync   # doubt you need to do this, but why not...
If you have an ISO image to burn, this should work (provided your drive is writable):
cdrecord dev=/dev/hdXX slackware-11.0-install-d1.iso
or something like this if you want to control the speed and see verbose messages
cdrecord -v speed=4 dev=/dev/hdXX slackware-13.0-install-d1.iso
Here is a good link for md5sum checking an ISO.
Trying to burn all these MST3k DAP-DVDs, I wanted to ensure that the burned disc exactly matched the source files. This little script should do it. (OK, it ended up being not so simple/little, but I think it is a solid solution. My only worry is whether the files from the cat wildcard always sort properly.)
verify_mst3k_dap_dvd.sh
#!/bin/sh
#
# verify_image: Input an mds file, and verify the image with the drive
if [ -z "$1" ]; then
    echo
    echo "usage: $0 <FILE.mds> [dvd drive]"
    echo "dvd drive example: /dev/hdb"
    echo
    exit
fi

SRCMDS=$1
DVD=$2
if [ -z "$2" ]; then
    DVD=/dev/hdc
fi

echo "Verifying $SRCMDS matches with DVD in $DVD"
FILEROOT=${SRCMDS%.mds}
echo "File root = $FILEROOT"
NOOFFILE=`ls "$FILEROOT.i"* | wc -l`
FULLFILES=`expr $NOOFFILE - 1`

# get the size of the files, but ignore any that are FULL (this could break if a disc was completely full?)
SMALLBYTES=`du --bytes "$FILEROOT".i* | grep -v 734003200 | awk '{ print $1}'`
if [ -z "$SMALLBYTES" ]; then
    # init the value with the default if we got nothing from the inverse grep
    SMALLBYTES=734003200
fi
FULLBLOCKS=`expr $FULLFILES \* 358400 + $SMALLBYTES \/ 2048`

echo "Running MD5sum of disk (block size=2048 count=$FULLBLOCKS)..."
MD5DVD=`dd if=$DVD bs=2048 count=$FULLBLOCKS | nice -n 19 md5sum | awk '{ print substr($1,1,32) }'`
echo "$MD5DVD is md5sum of $DVD"

echo "Running MD5sum of files...."
MD5FILES=`cat "$FILEROOT".i* | nice -n 19 md5sum | awk '{ print substr($1,1,32) }'`
echo "$MD5FILES is md5sum of files"

if [ "$MD5DVD" != "$MD5FILES" ]; then
    echo "ERROR: MD5sums DO NOT MATCH!?!"
    exit 1
else
    echo "SUCCESS: MD5sum of DVD matches files."
    exit 0
fi
I found this here, and will just replicate it for consolidation.
time sh -c "dd if=/dev/zero of=ddfile bs=8k count=2000000 && sync"
Here's a great trick if you want to replace all occurrences of something in a table with something else. This is an example I did at work to update the directories in our Joomla Remository:
UPDATE mos_downloads_files SET filepath = replace(filepath,"/web/lvis/downloads/data_sets/LDS_1.02/","/data/www/lvis/downloads/data_sets/LDS_1.02/");
If you don't want to be that guy who eats up the entire network's bandwidth, you can limit the speed of a transfer, say with rsync, by including a bandwidth limit option.
Here is an example keeping the speed to 7.5 MB/s, about 60 Mbps (half my current speed) (examples are from https://www.cyberciti.biz/faq/how-to-set-keep-rsync-from-using-all-your-bandwidth-on-linux-unix/)
rsync --bwlimit=KBPS src dst
rsync --bwlimit=KBPS [options] src dst
rsync --bwlimit=KBPS [options] src user@dst
rsync --bwlimit=KBPS [options] user@src /path/to/dir
rsync --bwlimit=KBPS -a -P /path/to/dir/ user@server1.cyberciti.biz
rsync --bwlimit=7500 -a -P /path/to/dir/ user@server1.cyberciti.biz   # 7.5 MBytes/sec
This works on the mac much better than pure rsync (found here: https://unix.stackexchange.com/questions/30953/tar-rsync-untar-any-speed-benefit-over-just-rsync)
tar -C /src/dir -jcf - ./ | ssh user@server 'tar -C /dest/dir -jxf -'
#
# example (doesn't make the directory like rsync does)
#
time tar -C /local/stuff -jcf - ./ | ssh username@machinename 'tar -C /Volumes/blah/incoming/remote/stuff -jxf -'
#
# do not compress
#
time tar -C /local/stuff -cf - ./ | ssh username@machinename 'tar -C /Volumes/blah/incoming/remote/stuff -xf -'
Here is how you can pipe tar through pigz (on this machine I had to give the full path to pigz, just FYI):
# can also add --exclude="" on the first tar line to exclude a subdir
cd /mnt/external/some/file/directory/
time tar -cf - filename.bin | pigz | ssh user@server "/usr/local/bin/pigz -d | tar xf - -C /destination/path/"
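The same tar-to-tar pipe can be rehearsed locally with no ssh or pigz in the middle, which is a cheap way to confirm the plumbing before pointing it at a server (all paths below are throwaway examples):

```shell
# a source tree and an empty destination
mkdir -p /tmp/tardemo/src/sub /tmp/tardemo/dst
echo hello > /tmp/tardemo/src/sub/file.txt

# pack on one side of the pipe, unpack on the other;
# in the remote version, ssh simply sits between the two tars
tar -C /tmp/tardemo/src -cf - ./ | tar -C /tmp/tardemo/dst -xf -

ls /tmp/tardemo/dst/sub
```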
copy FROM a server back to yourself
bash-3.2$ ssh username@server "tar czf - --strip-components=PATHDEPTH /server/source/directory" | tar xzf - -C /your/local/dest/
You need to widen the allowed time-stamp difference so Windoze doesn't re-copy everything:
# always have vP for verbose and progress
rsync --modify-window=5 -vPrltD NORMAL THING
# NO LUCK for me like this, but adding this does only check file size
rsync --size-only -vaP FROM/ TO/
# try this
rsync --size-only -vPrltD
# OR remove -t
# doing this on windows backup drive:
rsync --size-only -vPrlD
In order to access something through a machine, you could pipe http (or anything) over sshd with something like this:
ssh -C -L 8000:localhost:80 david@avalon.gsfc.nasa.gov
URL: http://localhost:8000
NOTE: Verified today this works from South Africa between my servers. — David Lloyd Rabine 2023/10/19 10:07
On the machine you want to get connected to: (-N to not get a prompt, just create the tunnel)
ssh -R 6333:localhost:22 username@youroutsideserver.com
# or
ssh -N -R 6333:localhost:22 username@youroutsideserver.com
log into that machine and then connect to your box with:
ssh -p 6333 localhost
# or
ssh -p 6333 userOnAboveMachine@localhost
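As an alternative to retyping the -R flags, the reverse tunnel can live in ~/.ssh/config; everything below (the host alias, server name, and ports) is a placeholder you would adapt to your own setup:

```
# ~/.ssh/config on the machine you want to reach later
Host lifeline
    HostName youroutsideserver.com
    User username
    RemoteForward 6333 localhost:22
    ServerAliveInterval 30
    ServerAliveCountMax 3
```

Then `ssh -N lifeline` sets up the same tunnel as the long command line above, and the keepalive settings help it notice a dead hotel connection.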
Keeping a sshd tunnel open from a hotel connection to a trusted system. Richard suggested doing something like this:
Here is an example: lifeline.sh (run this on the system from the hotel to open up a tunnel to your trusted computer)
#!/bin/sh
# set and keep up a tunnel to our trusted system
while ( /bin/true ); do
    ssh -CX -R 5961:localhost:5901 -R 2232:localhost:22 trusted.net
done
This setup then allows remote workers into our systems at the hotel to process data. Adjust the port numbers as necessary and run the script on any system someone temporarily gives you access to.
If you want to resize a bunch of files with Linux, try the following command line I found on this webpage (uses the ImageMagick suite):
for i in *.jpg; do convert -resize 50% -quality 80 $i conv_$i ; done
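That loop breaks on filenames containing spaces because $i is unquoted. Here is a quoted variant, written as a dry run that only prints the convert commands (drop the echo to actually resize; convert is assumed to be ImageMagick's):

```shell
# throwaway directory with a filename containing a space
mkdir -p /tmp/resizedemo && cd /tmp/resizedemo
touch "a.jpg" "b photo.jpg"

# quoting "$i" keeps names with spaces intact; echo makes this a dry run
for i in *.jpg; do
    echo convert -resize 50% -quality 80 "$i" "conv_$i"
done
```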
This looks like a VERY flexible file system where you can add devices after the FS is created, so expansion is infinite without having to reformat, etc… Pretty neat. What happens if you remove a device and there isn't enough space left to hold what is on there!? I am pleasantly confused!
I'm setting up my little portable raid-0 drive as a linux software raid 1 so if a disk fails, it will just be rebuilt under linux.
A lot of this is coming directly from here and the man page for mdadm.
mdadm --create /dev/md0 --chunk=64 --level=1 --name=drabineraid --raid-devices=2 /dev/sda1 /dev/sdb1
# on chewy, i did this (since it already had a couple SCSI drives)
mdadm --create /dev/md0 --chunk=64 --level=1 --name=drabineraid --raid-devices=2 /dev/sdd1 /dev/sde1
mdadm --assemble /dev/md0 /dev/sdd1 /dev/sde1
# or after you have a config file
mdadm --assemble --scan
mkfs.ext3 -m 1 /dev/md0 -L rabineraid # only do a 1 percent reserve for the super user (this is my user space) and label as 'rabineraid'
To stop the software raid:
mdadm --stop /dev/md0
Put this in your /etc/rc.d/rc.local to automatically monitor your arrays and email yourself the results:
mdadm --daemonise --monitor --mail YOUR.EMAIL@YOUR.SERVER.COM --test /dev/md0
Here is an example of querying the status of the raid while it is building up:
[root@gauss ~]# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Mon Jan 14 13:50:16 2008
     Raid Level : raid1
     Array Size : 488383936 (465.76 GiB 500.11 GB)
    Device Size : 488383936 (465.76 GiB 500.11 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Mon Jan 14 13:50:16 2008
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

 Rebuild Status : 0% complete

           UUID : 74b7c532:2225ed7e:4fa0d22e:5291ec50
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       65        1      active sync   /dev/sde1
Finally moving into the 21st century and using git as our repository for a lidar project at work.
git config --global user.name "YOUR NAME HERE"
git config --global user.email "YOUR.EMAIL@YOUR.EMAIL.HOST"
Not sure what this does, but Joey says “after/before you switch to the other branch”
git fetch
git clone git@aetd-git.gsfc.nasa.gov:joseph.e.gurganus/splice-fsw.git
git add .
git commit -m "Some comment probably a good idea"
git push
git add a
git commit a -m "bugfix, in a"
git add b
git commit b -m "new feature, in b"
# to update the dev branch
git pull origin dev
# to update the master
git pull origin master
git branch -ra
git checkout -b dev
git branch
git push # probably error
git push --set-upstream origin dev_dlr
cd <repo_name>
$ git branch -a

You should see something similar to the following:

* master
  <feature_branch>
  remotes/origin/<feature_branch>
  remotes/origin/master
$ git checkout <feature_branch>
$ git pull
$ git branch
$ git branch
* <feature_branch>
  master
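A scratch repository is a safe place to watch the branch listing behave as described (the repo below is throwaway; the user name and email are dummies needed for the commit):

```shell
# throwaway repo for the demo
mkdir -p /tmp/gitdemo && cd /tmp/gitdemo
git init -q .
git config user.name "You"
git config user.email you@example.com
echo hi > readme.txt
git add readme.txt
git commit -q -m "first commit"

git checkout -q -b dev   # create and switch to a dev branch
git branch               # the asterisk marks the branch you are on
```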
If you want a list of which files were changed during a check-in (for instance, if that check-in broke everything and you just want to go back to the revision before it), use this command:
svn diff -r 167:168 --summarize
Examples of using lbzip2 or pigz for multi-threaded compression and decompression of tar files.
time tar tvf backup_work.tar.bz2 --use-compress-program=lbzip2
time tar tvf backup_file.tar.gz --use-compress-program="pigz --processes 8" > backup_file.tar.gz.ls.txt
time tar cf - paths-to-archive | pigz > archive.tar.gz
tar cf - paths-to-archive | pigz -9 -p 32 > archive.tar.gz
This is nice if you just want to grab one file and not leave a pile of directories lying around to clean up after getting the file you really want.
time tar --strip-components 3 -zxvf SOMEARCHIVE.tar.gz path1/path2/path3/SOMEFILE
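A quick rehearsal with a scratch archive shows what --strip-components does (all paths are invented for the demo):

```shell
# build a nested archive
mkdir -p /tmp/stripdemo/path1/path2/path3
echo payload > /tmp/stripdemo/path1/path2/path3/SOMEFILE
tar -C /tmp/stripdemo -zcf /tmp/stripdemo.tar.gz path1

# extract just the one file, dropping the three leading directories
mkdir -p /tmp/stripout && cd /tmp/stripout
tar --strip-components 3 -zxf /tmp/stripdemo.tar.gz path1/path2/path3/SOMEFILE

ls   # SOMEFILE lands right here, no path1/path2/path3 left behind
```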
How do you combine PDF files into one big file? I'm glad you asked! I found this very useful for pulling my FAA documentation together from over 200 separate PDF prints from my CAD program. I found the answer here
alias pdfmerge='gs -dBATCH -dNOPAUSE -q -sDEVICE=pdfwrite -sOutputFile=./output.pdf'
Then to combine the files, just run: (and you'll end up with an output.pdf where you are)
pdfmerge FILE1 FILE2 ... etc...
Found this tip here
mystring=/foo/fizzbuzz.bar
echo basename: $(basename $mystring)
echo basename + remove .bar: $(basename $mystring .bar)
echo dirname: $(dirname $mystring)
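The same three results can be had with pure parameter expansion, no subshells (same example string as above):

```shell
mystring=/foo/fizzbuzz.bar

echo "basename:        ${mystring##*/}"   # fizzbuzz.bar
name=${mystring##*/}
echo "minus extension: ${name%.bar}"      # fizzbuzz
echo "dirname:         ${mystring%/*}"    # /foo
```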
If you have pesky control-M characters in your file, edit with vi and use:
:%s/^M//g
If you know a part of the name or a couple parts of the file, you can find it with the following examples.
Just copy and paste any of these commands; they all work.
# say you are looking for word world (an old cartoon I captured over the air with the mythtv for the kids)
# search for world
find /pool/media/video -iname '*world*'
# but you might get a lot of hits since world is a popular name, so use grep to add word
find /pool/media/video -iname '*world*' | grep -i 'word'
# and just to add another key word as an example, let us only show mpg files
find /pool/media/video -iname '*world*' | grep -i 'word' | grep -i 'mpg'
# and if you want to clear the screen first for easy reading, stack that command
clear ; find /pool/media/video -iname '*world*' | grep -i 'word' | grep -i 'mpg'
Here are a few command line examples to show files by modification time (always wanted a good way to do this) so you can see what the latest files added to a directory or drive are:
find /pool/media/video/hd -xdev -type f -exec stat --format=%Y%n {} \; | sort
# make it look nicer (the time is "seconds since Jan 1 1970") (n = numeric sort)
find /pool/media/video/hd -xdev -type f -exec stat --format="%Y %n" {} \; | sort -k 1,1n
# pipe to less if you want to page up / page down for a look (reverse the sort so newest is at top); q to exit less
find /pool/media/video/hd -xdev -type f -exec stat --format="%Y %n" {} \; | sort -k 1,1nr | less
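A scratch run makes the sort order easy to see; touch -d backdates the files so they have staggered modification times (the directory and dates are invented for the demo):

```shell
# three files with staggered timestamps
mkdir -p /tmp/mtimedemo
touch -d '2020-01-01' /tmp/mtimedemo/old.txt
touch -d '2022-01-01' /tmp/mtimedemo/mid.txt
touch -d '2024-01-01' /tmp/mtimedemo/new.txt

# oldest first; change the key to -k 1,1nr for newest first
find /tmp/mtimedemo -type f -exec stat --format="%Y %n" {} \; | sort -k 1,1n
```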
Use the -newermt switch, which takes human-readable dates, to find files modified in a given time window and copy them from one place to another.
find /Volumes/6608 -newermt "2024-11-01 00:00:00" ! -newermt "2024-12-01 00:00:00" -exec cp -a "{}" . \;
If you want to sort a list of files (say, an md5deep list) by their path instead of by the md5sum itself, try this:
sort +1 -2 md5deeps.txt
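Note that current GNU sort rejects the old +1 -2 key syntax; -k 2,2 is the modern spelling of the same thing. A worked example with made-up checksums:

```shell
# a tiny fake md5deep-style list: checksum, then path (values invented)
cat > /tmp/sortdemo.txt <<'EOF'
ffffffffffffffffffffffffffffffff  /data/b/two.bin
00000000000000000000000000000000  /data/c/three.bin
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa  /data/a/one.bin
EOF

# sort on the second field (the path)
sort -k 2,2 /tmp/sortdemo.txt
```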
If you want to show your directories by size (to help you figure out why you have very little hard drive space left), I like to do a limited find and then use du to see who the offending directory is.
Here I am trying to figure out why my windows partition is full:
sudo find /windows/ -maxdepth 1 -type d -exec du -hs {} \;
Or if you would like to sort by numeric size:
sudo find /windows/Users/david -maxdepth 1 -type d -exec du -s {} \; | sort -n -k 1
Sometimes this works faster than straight up rsync when you pipe the ssh copy with tar (Mac OS X back in the day)
tar -C /src/dir -jcf - ./ | ssh user@server 'tar -C /dest/dir -jxf -'
Here is how to start the team speak server on archon.
cd ~/teamspeak/tss2_rc2/ ./server_linux start
NOTE: you can replace start with start, stop, or restart (all three should be valid options)
Say you want to mass change file extensions in a directory. I found this script here: http://www.franzone.com/2007/09/05/bash-script-to-change-file-extensions/
Note: I added sort just to do it in order.
Usage: change_extension.sh <DIRECTORY> <FROM> <TO> (the dots get stripped, so include them or not as you like)
#!/bin/bash
#
OLDEXT=${2/#.}
NEWEXT=${3/#.}
find "${1}" -iname "*.${OLDEXT}" | sort | while read F
do
    NEWFILE="${F/%${OLDEXT}/${NEWEXT}}"
    echo "mv \"${F}\" \"${NEWFILE}\""
    mv -f "${F}" "${NEWFILE}"
done
#!/bin/bash
#
# Changes every filename in working directory to all lowercase.
#
# Inspired by a script of John Dubois,
# which was translated into Bash by Chet Ramey,
# and considerably simplified by Mendel Cooper, author of this document.

for filename in *        # Traverse all files in directory.
do
    fname=`basename $filename`
    n=`echo $fname | tr A-Z a-z`   # Change name to lowercase.
    if [ "$fname" != "$n" ]        # Rename only files not already lowercase.
    then
        mv $fname $n
    fi
done

exit 0
Here are some examples that I found http://www.ufoot.org/more/blog/misc/howto/bashbasename for yanking out the file path etc directly in bash.
function basename()
{
    local name="${1##*/}"
    echo "${name%$2}"
}

function dirname()
{
    local dir="${1%${1##*/}}"
    [ "${dir:=./}" != "/" ] && dir="${dir%?}"
    echo "$dir"
}

# Two additional functions:
# 1) namename prints the basename without extension
# 2) ext prints extension of a file, including "."

function namename()
{
    local name=${1##*/}
    local name0="${name%.*}"
    echo "${name0:-$name}"
}

function ext()
{
    local name=${1##*/}
    local name0="${name%.*}"
    local ext=${name0:+${name#$name0}}
    echo "${ext:-.}"
}
Windows text files on UNIX / Linux have CONTROL-M (^M) characters all over the place. To remove them, use tr (thanks to Michelle for showing me this one):
tr -d '\015' <filein> > <fileout>
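A quick round trip shows it working; printf fabricates the DOS line endings and od confirms they are gone (file names are throwaway):

```shell
# fabricate a DOS-style file: each line ends with CR LF
printf 'line one\r\nline two\r\n' > /tmp/dosfile.txt

# strip the carriage returns (octal 015) exactly as above
tr -d '\015' < /tmp/dosfile.txt > /tmp/unixfile.txt

# od -c shows the \r characters are gone
od -c /tmp/unixfile.txt
```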
While in the field I needed to share my wireless connection to a static IP linux machine.
sudo /sbin/iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
# Turn on IP forwarding
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
If you want a device to always come up as the same USBX device, you can set up a udev rule that matches the device's serial number and assigns the name.
Selecting a default window manager is trivial, just run the following (found this here):
xwmconfig
Here is a manual way of grabbing the listings from Schedules Direct assuming you have an account.
#!/bin/bash
#
# grab the data from schedules direct
tv_grab_na_dd --dd-data $HOME/xmldata.xml
# fill the database with our listing data
mythfilldatabase --dd-file 1 -1 1 $HOME/xmldata.xml
And your configuration file is:
## using config filename $HOME/.xmltv/tv_grab_na_dd.conf
##
## our configuration file:
## which is: ~/.xmltv/tv_grab_na_dd.conf
username: USER.NAME
password: WHAT.IT.IS
timeoffset: -0400
lineup: PC:21851
auto-config: add
channel: 16-1 WBOCDT
channel: 21-2 WBOCDT2
channel: 8 WRAVLP
channel: 47-1 WMDTDT
channel: 28-2 WCPBDT2
channel: 28-3 WCPBDT3
channel: 28-1 WCPBDT
channel: 47-2 WMDTDT2
channel: 36 WPMCCA
channel: 30 W30CI
channel: 25 W25AA
To get the partition UUID, the disk dev links will illuminate you:
ls -l /dev/disk/by-uuid
NOTE: each “title” entry will be counted as an item in the menu, starting with 0 as the first.
I recently was installing Slackware from our laptop to a work machine, and managed to blow away the GRUB on our laptop in the process. Here is how I fixed the GRUB boot loader, using this page.
sudo fdisk -l
sudo mount /dev/sdXY /media/root   # mount your root partition here first
sudo grub-install --root-directory=/media/root /dev/sdx
With grub2 you only need to modify the file:
sudo vi /etc/default/grub
sudo update-grub
After upgrading from 10.04 to 11.04, grub2 got broken. Found the answer quickly here: http://www.webupd8.org/2010/05/fix-symbol-grubputs-not-found-when.html
High resolution monitor where icons and text are unreadable? You need to make a couple of changes to the settings to get the icons, text, and toolbar usefully sized!
Replace Unity with KDE on Ubuntu 14.04 LTS found here
sudo apt-get install kubuntu-desktop
You need these packages in order to mount NFS shares on your system (to act as a client)
sudo apt-get install portmap nfs-common
Use the following to mount your drives, assuming you have the information in /etc/fstab
sudo mount -a -t nfs
In a pinch, if you want to set up an NFS server, say with a boot disk, do the following:
sudo apt-get install nfs-kernel-server nfs-common portmap
sudo dpkg-reconfigure portmap
sudo /etc/init.d/portmap restart
sudo nano /etc/exports
sudo /etc/init.d/nfs-kernel-server restart
To mount a windows shared directory, I found this page and took directly from it adapting it for my ReadyNAS Raid: http://ubuntuforums.org/showthread.php?t=280473
sudo mkdir /media/rabine
sudo smbmount //10.0.0.3/rabine /media/rabine -o username=MYUSERNAME,password=MYPASSWORD,uid=1000,mask=000
I got Cisco VPN working using this link. You just need to install the following packages for this to work:
sudo apt-get install network-manager-pptp network-manager-vpnc network-manager-openvpn
= Broadcom BCM4312 =
This is the wireless driver for our little Dell 1501 laptop. I guess they cannot ship the wireless firmware, so you just have to run this one line, and it fixed everything. I had to rmmod the module and modprobe it back into the kernel, and it came up after a little pause.
sudo apt-get install b43-fwcutter
sudo apt-get install flashplugin-installer
sudo apt-get install flashplugin-nonfree
sudo apt-get install gnudatalanguage
sudo apt-get install gmt
# high resolution coast lines (get them from NOAA) and just untar these and move to /usr/share/gmt/coast/
wget ftp://ibis.grdl.noaa.gov/pub/gmt/GSHHS_high.tar.bz2
wget ftp://ibis.grdl.noaa.gov/pub/gmt/GSHHS_full.tar.bz2
sudo apt-get install hexedit
sudo apt-get install jed
sudo apt-get install md5deep
* SciTe Editor for Gnome (not sure about this one yet.. )
sudo apt-get install scite
sudo apt-get install smplayer
sudo apt-get install subversion
sudo apt-get install vlc
This removes unused kernels with a single command line: (I found this on the third answer here: http://askubuntu.com/questions/590673/why-doesnt-ubuntu-remove-old-kernels-automatically )
sudo dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get -y purge
youtube-dl --extract-audio --audio-format=mp3
youtube-dl -f mkv
Sometimes I have noticed files are not processed in order when doing command line things. Using sort with xargs lets us sort the input file names so that the order is correct.
ls -v j080_202405* | sort | xargs cat > j080_overnight.jps
On the computer guys, they recommended two utilities/sites:
Where is all my drive space? Use WinDirStat to find out!
I have a dual boot system, and I wanted to remove the GRUB loader. Found this link, and in the comments I think the correct answer is to simply run this while in Windows 7 (note: I could not find this on my drive, so it must be on the Windows 7 install disk... which I don't think I have!?):
bootsect /nt60 c: /mbr
Here is a method for rate limiting your scp transfers in windows:
Say you want to run Ubuntu under Windoze. Try this link:
I wanted to remove ALL of cygwin from a windows 10 machine and was having access denied issues. This worked: https://www.adminlabs.com/how-to-remove-cygwin-permission-denied-problem
cd c:\
takeown /r /d y /f cygwin64
icacls cygwin64 /t /grant Everyone:F
rmdir /s /q cygwin64