CSS vertical alignment

Windows: Internet Explorer 8.0+, Firefox 1.0+, Google Chrome, Opera 4.0+, Safari 3.1+, SeaMonkey 1.0+ [1].

Linux: Firefox 1.0+, Google Chrome / Chromium, Opera 5.0+, SeaMonkey 1.0+ [2].

A method of CSS vertical alignment for a block element containing text and images. It works for various combinations of inline and block elements. Example:

[Live demo: a 200px-high block in which the lines "CSS vertical alignment", an image, and "CSS vertical alignment" are centered vertically]

HTML / XHTML. Code:

<div class="parent">
    <div class="child">
        <div class="childcontent">CSS vertical alignment</div>
        <div class="childcontent"><img src="image.jpg" width="68" height="68" alt="Image" /></div>
        <div class="childcontent">CSS vertical alignment</div>
    </div>
</div>

CSS. Code:

.parent {position: relative; left: 0px; top: 0px; height: 200px; display: table;}
.child {position: relative; left: 0px; top: 0px; display: table-cell; vertical-align: middle;}
.childcontent {position: relative; left: 0px; top: 0px;}

Note: .parent and .childcontent may be floated left ("float: left;") or not, but .child must not be floated for this method of CSS vertical alignment to work.
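For convenience, the two code blocks above can be combined into a minimal self-contained test page. This is just a sketch: the title and image.jpg are placeholders, and any image of roughly that size will do.

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>CSS vertical alignment</title>
<style>
/* .parent establishes the table; its height sets the area to center within */
.parent {position: relative; left: 0px; top: 0px; height: 200px; display: table;}
/* .child is the table cell; vertical-align: middle does the actual centering */
.child {position: relative; left: 0px; top: 0px; display: table-cell; vertical-align: middle;}
.childcontent {position: relative; left: 0px; top: 0px;}
</style>
</head>
<body>
<div class="parent">
    <div class="child">
        <div class="childcontent">CSS vertical alignment</div>
        <div class="childcontent"><img src="image.jpg" width="68" height="68" alt="Image" /></div>
        <div class="childcontent">CSS vertical alignment</div>
    </div>
</div>
</body>
</html>

Opened in any of the browsers listed above, the page should show the three .childcontent blocks as a group centered vertically within the 200px-high .parent block.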


[1] As well as Netscape 6.01+, Mozilla 0.6+.

[2] As well as Netscape 6.01+, Mozilla 0.6+.



Lightweight web browsers for Linux

Netsurf : Hv3 : Dillo : Links2

Nowadays the real lightweight web browsers are those without JavaScript and Flash support, or with only very limited support. A web browser with even the lightest interface becomes heavyweight when working with the modern internet, crammed as it is with scripts and multimedia. Such browsers are not numerous, and some of them are moving towards getting JavaScript support - that is, towards dropping out of the "lightweight web browsers" category. Lightweight web browsers may be more advanced - with CSS support - or less advanced - with no CSS support or close to none.

Netsurf - / home page /

Currently the most advanced lightweight web browser for Linux. CSS support is quite solid, HTML support is good, and the feeble JavaScript support may be disabled by default. There is a version of Netsurf for *nix systems that can be run without X, using the framebuffer of the graphics adapter. Created initially for RISC OS and only later ported to Linux; there are also versions for other *nix systems, as well as for AmigaOS, AmigaOS 4, Atari OS, BeOS/Haiku, Mac OS X, MorphOS. ( More about Netsurf features )

[Screenshots: NetSurf 3.0 on PuppyLinux 5.2.8 displaying netsurf-browser.org, w3schools.com/browsers/browsers_stats.asp, en.wikipedia.org/wiki/Netsurf, ebay.com, kompx.com/en/web-browsers-for-dos.htm, twitter.com]

Hv3 - / home page /

A less advanced lightweight web browser for Linux, but still one with considerable CSS support. JavaScript / ECMAScript support is weak; HTML support is quite good.

Lynx. Web data extraction

Aside from browsing / displaying web pages, Lynx can dump the formatted text of the content of a web document, or its HTML source, to standard output. That output may then be processed by tools present in Linux, like gawk, Perl, sed, grep, etc. Some examples:

Dealing with external links

Count the number of external links. Lynx sends the list of links from the content of a web page to standard output. The first grep keeps only the lines starting with "http:", the second grep picks out of them the lines not starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the external links of the web page), and wc counts the number of links extracted and displays it:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | wc -l

Find external links and save them to a file. Lynx sends the list of links from the content of a web page to standard output. The first grep keeps only the lines starting with "http:", the second grep picks out of them the external links of the web page, as above, and the result is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" > file.txt

Find external links, omit duplicate entries and save the output to a file. Lynx sends the list of links from the content of a web page to standard output. The two greps extract the external links as above, sort sorts them and uniq deletes duplicate entries; the output is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -o "http:.*" | grep -E -v "http://compmiscellanea.com|http://www.compmiscellanea.com" | sort | uniq > file.txt

Dealing with internal links

Count the number of internal links. Lynx sends the list of links from the content of a web page to standard output. Grep keeps only the lines starting with "http://compmiscellanea.com" or "http://www.compmiscellanea.com" (the internal links), and wc counts the number of links extracted and displays it:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | wc -l

Find internal links and save them to a file. Lynx sends the list of links from the content of a web page to standard output. Grep keeps only the internal links and saves them to a file:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" > file.txt

Find internal links, omit duplicate entries and save the output to a file. Lynx sends the list of links from the content of a web page to standard output. Grep keeps only the internal links, sort sorts them and uniq deletes duplicate entries; the output is saved to a file:

lynx -dump -listonly "elinks.htm" | grep -E -o "http://compmiscellanea.com.*|http://www.compmiscellanea.com.*" | sort | uniq > file.txt

The reason for using "lynx -dump -listonly" instead of just "lynx -dump" is that a web page may contain plain text strings that merely look like links (containing "http://", for instance) in the text of its content, as is the case with the http://www.kompx.com/en/elinks.htm page.
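As a side note on the pipelines above: the "sort | uniq" pair can also be written as "sort -u", which sorts and removes duplicate lines in a single step; the result is the same.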