Lynx. Web data extraction

Operating systems: Linux

Aside from browsing / displaying web pages, Lynx can dump the formatted text of the content of a web document, or its HTML source, to standard output. That output may then be processed by tools present in Linux, like gawk, Perl, sed, grep, etc. Some examples:
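For instance, using the page discussed later in this article: the first command dumps the formatted text of its content, the second one its HTML source:

lynx -dump "http://www.kompx.com/en/elinks.htm"

lynx -source "http://www.kompx.com/en/elinks.htm"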

Dealing with external links

Count the number of external links

Lynx writes the list of links from the web page to standard output. The first grep extracts the part of each line beginning with "http:" (the URL itself); the second grep filters out the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com", leaving only the external links; wc counts the links extracted and displays the number:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | wc -l
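If a page may also contain https links, the same approach can be extended to cover both schemes; a sketch, keeping the grep -E syntax used in this article:

lynx -dump -listonly "elinks.htm" | grep -E -o "https?:.*" | grep -E -v "https?://(www\.)?compmiscellanea.com" | wc -l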

Find external links and save them to a file

Lynx writes the list of links from the web page to standard output. The first grep extracts the part of each line beginning with "http:" (the URL itself); the second grep filters out the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com", leaving only the external links, which are then saved to a file:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" > file.txt
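The same pipeline can also collect external links from several pages at once; a sketch, with the second file name purely hypothetical:

for page in "elinks.htm" "links.htm" ; do lynx -dump -listonly "$page" ; done | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" > file.txt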

Find external links, omit duplicate entries and save the output to a file

Lynx writes the list of links from the web page to standard output. The first grep extracts the part of each line beginning with "http:" (the URL itself); the second grep filters out the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com", leaving only the external links; sort sorts them and uniq removes the duplicates (uniq only drops adjacent duplicates, which is why sort has to come first). The output is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | sort | uniq > file.txt
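Since uniq only needs sorted input here, the "sort | uniq" pair may be replaced by sort's -u option, producing the same result:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | sort -u > file.txt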

Dealing with internal links

Count the number of internal links

Lynx writes the list of links from the web page to standard output. Grep extracts only the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the internal links); wc counts the links extracted and displays the number:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | wc -l
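"lynx -dump -listonly" puts each link on a line of its own, so grep's -c option can count the matching lines directly, with no need for wc (assuming at most one link per line):

lynx -dump -listonly "elinks.htm" | grep -E -c "http://compmiscellanea.com|http://www.compmiscellanea.com"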

Find internal links and save them to a file

Lynx writes the list of links from the web page to standard output. Grep extracts only the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the internal links), which are then saved to a file:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" > file.txt
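To save the internal links as root-relative paths instead of absolute URLs, sed can strip the scheme and host part; a sketch, assuming GNU sed:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | sed -E 's|^http://(www\.)?compmiscellanea\.com||' > file.txt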

Find internal links, omit duplicate entries and save the output to a file

Lynx writes the list of links from the web page to standard output. Grep extracts only the links starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the internal links); sort sorts them and uniq removes the duplicates. The output is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | sort | uniq > file.txt
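If "http://compmiscellanea.com/..." and "http://www.compmiscellanea.com/..." should count as the same link, the "www." prefix can be normalized away before sorting; a sketch, assuming GNU sed:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | sed 's|^http://www\.|http://|' | sort | uniq > file.txt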

The reason for using "lynx -dump -listonly" instead of just "lynx -dump" is that a web page may contain plain-text strings that merely look like links (containing "http://", for instance) in the text of its content, as is the case with the http://www.kompx.com/en/elinks.htm page. "lynx -dump" would output formatted text in which real links and link-like plain-text strings look just the same, so grep could not tell one from the other. "lynx -dump -listonly" outputs only the list of actual links, so there is no confusion with strings that merely look like links.
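The difference is easy to check on such a page: the first command counts the lines containing "http://" in the full formatted dump (real links plus link-like plain text), the second only in the list of actual links, so the first count should come out higher:

lynx -dump "http://www.kompx.com/en/elinks.htm" | grep -c "http://"

lynx -dump -listonly "http://www.kompx.com/en/elinks.htm" | grep -c "http://"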

