Kompx.com or Compmiscellanea.com

Unzip multiple files. Linux

Operating systems : Linux

Unzipping multiple zip files into one directory with the Linux command line tool unzip. Contrary to possible expectations, "unzip *.zip" is not going to work; *.zip should be put into quotes. Without the quotes the shell expands the wildcard itself, and unzip then treats every archive name after the first as the name of a member to extract from the first archive:

unzip "*.zip"

There may be files with the same names in these archives. To avoid overwriting:

unzip -B "*.zip"

"Unzip -B" makes unzip to overwrite duplicates during extraction process, but saving a backup copy of each overwritten file. The names for these backup copy files are created by adding tilde ("~") at the end of the original names of the files. If a file extension is present, then "~" is added after it. If that is not enough, unique sequence number (up to 5 digits) is appended after the "~".

"Unzip -B" is not too practical. For example, since when the sequence number range for numbered backup files gets exhausted (99999, or 65535 for 16-bit systems), the backup file with the maximum sequence number is deleted and replaced by the new backup version without notice ( More on the subject ). The number of files in an archive may not be always known in advance or may be more than possible sequence number range, so "Unzip -B" is not a great choice. Renaming duplicate files by adding "~" at the end of their names, after the extension, is not too convenient either.

But the other built-in option is even worse. If the "-B" modifier is not used, then each time a file is extracted whose name matches one already unpacked, unzip asks "replace example.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename:". And each time "r" must be hit and a new name typed in. So a bash or other script solving the problem should probably be prepared and used instead, for example one that extracts each archive into a directory of its own, as in the sketch below.
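
A minimal sketch of such a script, assuming a POSIX shell and unzip available. It simply gives every archive a subdirectory of its own, named after the archive, so identically named files never clash:

#!/bin/sh
# Unpack every .zip in the current directory into its own subdirectory.
for archive in *.zip; do
    dir="${archive%.zip}"          # directory named after the archive
    mkdir -p "$dir"                # create it if it does not exist yet
    unzip -o "$archive" -d "$dir"  # extract the archive into it
done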


Aliosque subditos et thema

 

Renaming folders in mc

 

Renaming a folder in mc / Midnight Commander:

- Select a folder --> Shift - F6 --> Edit the existing folder name into a new one --> Enter

The original way of renaming a folder - the one mc / Midnight Commander had before "Shift - F6" was implemented - is also still there:

- Select a folder --> F6 --> Enter a new folder name --> Enter

And "Esc - 6" may be used instead of "F6":

- Select a folder --> Esc - 6 --> Enter a new folder name --> Enter

Lynx. Web data extraction

 

Aside from browsing / displaying web pages, Lynx can dump the formatted text of a web document's content, or its HTML source, to standard output. That output may then be processed by tools available in Linux, like gawk, Perl, sed, grep, etc. Some examples:

Dealing with external links

Count the number of external links. Lynx sends the list of links from the content of a web page to standard output. The first grep keeps only the "http:..." part of each line, the second grep drops the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com", leaving the external links of the web page, and wc counts the number of links extracted and displays it:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | wc -l

Find external links and save them to a file. Lynx sends the list of links from the content of a web page to standard output. The first grep keeps only the "http:..." part of each line, the second grep drops the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com", and the external links that remain are saved to a file:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" > file.txt

Find external links, omit duplicate entries and save the output to a file. As above, but sort sorts the external links and uniq deletes the duplicate entries before the output is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | sort | uniq > file.txt

Dealing with internal links

Count the number of internal links. Lynx sends the list of links from the content of a web page to standard output. Grep keeps only the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the internal links), and wc counts the number of links extracted and displays it:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | wc -l

Find internal links and save them to a file. Lynx sends the list of links from the content of a web page to standard output. Grep keeps only the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the internal links), and they are saved to a file:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" > file.txt

Find internal links, omit duplicate entries and save the output to a file. As above, but sort sorts the internal links and uniq deletes the duplicate entries before the output is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | sort | uniq > file.txt

The reason for using "lynx -dump -listonly" instead of just "lynx -dump" is that there may be web pages with plain text strings looking like links (containing "http://", for instance) in the text of the content, as is the case with the http://www.kompx.com/en/elinks.htm page.
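
The pipelines above can also be wrapped into a small reusable script, so that the page and the site's own domain do not have to be retyped every time. This is only a sketch assembled from the commands shown above; the script name and its parameters are made up for illustration:

#!/bin/sh
# Usage: ./extlinks.sh PAGE DOMAIN
# Prints the unique external links of PAGE, i.e. links that start with
# "http:" but not with http://DOMAIN or http://www.DOMAIN.
page="$1"
domain="$2"
lynx -dump -listonly "$page" \
    | grep -o "http:.*" \
    | grep -E -v "http://$domain|http://www\.$domain" \
    | sort | uniq

For example, "./extlinks.sh elinks.htm compmiscellanea.com > file.txt" reproduces the "find external links, omit duplicate entries and save the output to a file" example above.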