You always search Facebook, MySpace, YouTube, Yahoo, and Gmail using a web browser….. But…. do you know how it all works?
As you already know, to view a web site or a web page in a browser, you either type in the URL or click on a link (such as one in your Favorites/Bookmarks) and hit the Go button. Now, the page you have asked the browser to display is probably located on a Server computer far, far away. The web browser program sends a request (Could I have the web page, please?) to a web server program running on the remote computer. Newbies may be baffled here by the similarity of names… the computer on which the web site is stored is called a Server (uppercase) and it runs a program (confusingly) also called a server (all lowercase). Purists try to differentiate the computer (hardware) and the program (software) by capitalizing the first letter (as in Server), but this is not a rule. Anyway, the server program gathers the request from the web browser, tries to hunt for the web page, and then formulates a response. This response differs depending on whether the server program was able to find the requested web page or file. Assuming the server was able to locate the web page, it sends the HTML file to the web browser. The browser picks up all the information coming in from the server and does its best to display the web page. A typical web page has not only text but also images, and these are separate files that need to be transferred from the server to the browser. So the browser-server communication goes on until all the files have been transferred to the browser. Once the files arrive at your computer, the browser-server connection is severed - cut - chopped! If you now click on any link on this web page, or even refresh the page, the process starts all over again. FYI, this is called the client-server architecture.
One important point remains. Let us say you are looking at two different web pages of the same web site (such as eBay auctions) from two computers simultaneously, using the same internet connection. How does the eBay server program know which page (and its associated images and other files) to send to which computer? The answer to this seemingly complicated question lies in the request and response headers sent by the browser and the server, respectively. Each request and response has a header that contains details such as the computer name (actually, the IP address), so everything stays in tune.
Finally, the information over the web is transferred using a set of rules called the HyperText Transfer Protocol (HTTP).
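The whole request/response dance described above can be reproduced in a few lines. Here is a minimal Python sketch (the `Page` handler and its one-line HTML body are invented for the demo): it starts a tiny server program on a local port, then plays the browser's part by opening a TCP connection, sending a raw HTTP GET request, and reading the response headers and HTML until the server closes the connection.

```python
import http.server
import socket
import threading

# The "Server" side: serve one small HTML page on a local port.
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Hello</body></html>"
        self.send_response(200)  # "I found the page you asked for"
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
port = server.server_address[1]
threading.Thread(target=server.handle_request, daemon=True).start()

# The "browser" side: a raw HTTP GET request with its headers.
sock = socket.create_connection(("127.0.0.1", port))
sock.sendall(b"GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n")
response = b""
while chunk := sock.recv(4096):
    response += chunk
sock.close()  # connection severed - cut - chopped!

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode().splitlines()[0])  # status line, e.g. HTTP/1.0 200 OK
print(body.decode())
```

Note how the first line of the response tells the browser whether the page was found (200 OK), and how the connection is closed once the transfer is complete, exactly as described above.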
There are sooooooo many antivirus programs on the market…. some are effective, others are not :( But how do they work?
Virus Dictionary Approach
- As new viruses and malicious threats are discovered, they are added to a virus dictionary. The virus itself, its behavior, the threat it poses, and its author are the types of information held in the dictionary. Some anti-virus programs use this dictionary as a guide to identify suspicious or threatening software and files.
Once a file is created, opened, emailed, or downloaded to a computer, the anti-virus software checks it against the dictionary. If the file is deemed a possible threat, the software will delete it, quarantine it (to stop it from spreading to other, non-infected files), or repair it by removing the malicious code.
To stay up to date with any new viruses, the anti-virus software must regularly download updates to its dictionary. When new viruses appear on the internet, users are encouraged to send the infected files to the makers of the anti-virus software so that the main virus dictionary can be updated.
The dictionary approach has been deemed quite effective. However, it is not without its pitfalls. Hackers and virus creators have found a way around it by developing polymorphic viruses, which encrypt or mutate part of the malicious software as a form of disguise so that the anti-virus software will not recognize it as a virus.
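A toy Python sketch of the dictionary approach (the byte signatures and virus names below are invented for illustration, not from any real product): each dictionary entry is a byte pattern, and a file is flagged when any pattern appears in it. The last line also shows why polymorphism defeats this naive matching: change even one byte of the pattern and the scan misses it.

```python
# Hypothetical virus dictionary: byte signature -> virus name.
SIGNATURE_DICTIONARY = {
    b"\xde\xad\xbe\xef\x13\x37": "Example.Worm.A",
    b"EVIL_MACRO_PAYLOAD": "Example.Macro.B",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all dictionary entries found in the data."""
    return [name for sig, name in SIGNATURE_DICTIONARY.items() if sig in data]

clean = b"just an ordinary document"
infected = b"some prefix \xde\xad\xbe\xef\x13\x37 some suffix"

print(scan(clean))     # []
print(scan(infected))  # ['Example.Worm.A']

# A one-byte "polymorphic" change slips right past the dictionary:
mutated = infected.replace(b"\x13\x37", b"\x13\x38")
print(scan(mutated))   # []
```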
Suspicious Behavior Approach
- The suspicious behavior approach monitors the behavior of all software programs running on the computer. This approach doesn’t identify or look for known viruses; instead, it flags suspicious behavior, such as a program trying to write to an executable file. The suspected program is flagged and a warning message asks the user to decide what to do with it.
The suspicious behavior approach is more effective at stopping new viruses, since it doesn’t rely for reference on a dictionary that may not be regularly updated. However, this approach can be annoying because of all the false positives it gives. After a while, a user can become desensitized to the overwhelming number of false warnings and inadvertently let a virus through. For this reason, anti-virus software that relies purely on this approach has become almost nonexistent.
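A toy sketch of the behavioral approach (the event format and rules are invented for illustration): instead of matching known signatures, it watches what programs *do* and flags actions a virus typically performs, such as writing to an executable file.

```python
# Hypothetical rules: (action, target extension) pairs considered suspicious.
SUSPICIOUS_ACTIONS = {
    ("write", ".exe"),
    ("write", ".com"),
    ("modify", "boot_sector"),
}

def is_suspicious(event: dict) -> bool:
    """Flag an event if a program writes to an executable or the boot sector."""
    target = event["target"]
    ext = "." + target.rsplit(".", 1)[-1] if "." in target else target
    return (event["action"], ext) in SUSPICIOUS_ACTIONS

# A hypothetical stream of observed program behavior.
events = [
    {"program": "editor",  "action": "write", "target": "notes.txt"},
    {"program": "unknown", "action": "write", "target": "winword.exe"},
]
flagged = [e["program"] for e in events if is_suspicious(e)]
print(flagged)  # ['unknown'] -- the user would now be asked what to do
```

The trade-off described above is visible even here: a legitimate installer also writes `.exe` files, so this rule would flag it too, which is exactly the false-positive problem.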
There isn’t any 100 percent effective way to guard users against virus attacks. Some believe that Microsoft could do more with its security fixes in popular programs such as Outlook and Windows. Anti-virus software and user caution are the best forms of protection out there right now.
Why are these viruses popular? Maybe because….. MAYBE….. they spread so far that the whole world came to know them….
Here are the TOP 5 most popular
Blaster computer worm
This computer worm was developed to launch a SYN flood against port 80 of windowsupdate.com, intending a Distributed Denial of Service (DDoS) attack against the website. Once the worm found an Internet connection, it caused system instability, and the user could see a message that appeared for 60 seconds saying:
"I just want to say LOVE YOU SAN!!"
"billy gates why do you make this possible? Stop making money
and fix your software!!”
ILOVEYOU computer virus
Surely everyone has heard of this computer virus. ILOVEYOU is believed to be one of the most dangerous pieces of malware among the viruses, worms, and Trojans in history, and it was written in VBScript. The worm was first identified in the Philippines on May 4, 2000, and in a rather short period of time (basically one day) it managed to spread throughout the globe, affecting an estimated 10 percent of all Internet users and leading to a $5.5 billion loss. Everything started when an email message arrived in the user’s inbox with the subject “ILOVEYOU”. The message featured an attachment called LOVE-LETTER-FOR-YOU.TXT.vbs. After penetrating the system, the worm overwrote files on the computer with copies of itself. Besides that, the ILOVEYOU worm sent a copy of itself to everyone on the victim’s contact list, which is why it managed to spread extremely fast. Just like in the previous case, the affected users were those running the Windows operating system.
Sobig computer worm
This computer worm was first identified in August 2003. Like all powerful viruses and worms, it affected millions of Internet users working on Windows. It was created using the Microsoft Visual C++ compiler and compressed with the help of tElock. It was both a worm (because it replicated itself) and a Trojan horse. To this day the developer of this worm has not been identified.
Virut computer virus
The goal of this computer virus is to infect portable executable files, including .exe and .scr files. Each time it spreads, the virus uses polymorphism to avoid being identified. After infecting a computer, the virus opens a backdoor and connects to an IRC server; using the backdoor, the attacker can download extra malware onto the victim’s machine. The Virut computer virus managed to shut down computers in the Texas court system.
Conficker computer virus
Spotted for the first time in November 2008, this computer virus managed to affect a huge number of computers. Conficker is a worm that can replicate itself and spread very quickly, mainly via a buffer overflow weakness in the Server Service on machines running the Windows operating system. The worm can disable Windows services such as Windows Update, Windows Defender, Windows Security Center, and Windows Error Reporting. Today almost all antiviruses are able to detect this worm. The main countries affected by Conficker include China, Argentina, Brazil, Russia, and India.
The best way to keep your computer virus-free is to install effective antivirus and Internet security software and always keep it updated. Also be cautious about opening sites you don’t know.
Computer viruses have a relatively short history, but the damage caused by some of them pushed computer experts to open a new chapter in the record of computer viruses. Some viruses led to serious damage and affected a large number of companies, universities, and even government agencies.
Here are some of the most dangerous computer viruses/worms in history:
Jerusalem - 1987
Also known as the “Friday the 13th” virus, this is one of the first MS-DOS viruses in history, and it caused enormous damage, affecting many countries, universities, and companies worldwide. On Friday, May 13, 1988, the virus managed to infect a number of institutions in Europe, America, and the Middle East. The virus was named after one of the first places that got “acquainted” with it: the Hebrew University of Jerusalem.
Along with a number of other computer viruses, including “Cascade”, “Stoned”, and “Vienna”, the Jerusalem virus managed to infect thousands of computers while remaining unnoticed. Back then, anti-virus programs were not as advanced as they are today, and many computer users had little knowledge of computer viruses, or did not know they existed at all.
Morris (a.k.a. Internet Worm) - November 1988
Origin : Cornell University / MIT USA
This computer virus infected over 6,000 computer systems in the United States, including the famous NASA research institute, which remained completely paralyzed for some time. Due to an error in its code, the worm sent millions of copies of itself to different network computers, entirely paralyzing network resources. The damages caused by the Morris computer virus were estimated at $96 million.
To spread, the computer virus exploited errors in operating systems such as Unix for the VAX and Sun Microsystems machines. The virus could also guess user passwords.
Solar Sunrise - 1998
A decade later the situation hadn’t changed; in fact, it had gotten worse. Using a computer virus, hackers in 1998 penetrated and took control of over 500 computer systems that belonged to the army, government, and private sector of the United States. The whole situation was dubbed Solar Sunrise after the well-known vulnerabilities in computers running the Sun Solaris operating system. Initially it was believed that the attacks were planned by operatives in Iraq. It was later revealed that the incidents were the work of two American teenagers from California. After the attacks, the Defense Department took drastic action to prevent future incidents of this kind.
Melissa - 1999
Origin : Aberdeen, New Jersey USA
The Melissa computer virus first appeared on March 26, 1999, when it shut down Internet mail systems that became clogged with e-mails infected by the worm. It is worth mentioning that at first Melissa was not meant to cause any harm, but by overloading servers it led to serious problems. It first spread in the Usenet discussion group alt.sex, hidden within a file called “List.DOC”, which featured passwords that served as keys to unlock 80 pornographic websites. The original form of the virus was sent through e-mail to different users.
The Melissa virus was developed by David L. Smith in Aberdeen Township, New Jersey. Its name comes from a lap dancer the programmer had met in Florida. After being caught, the creator of the virus was sentenced to 20 months in federal prison and ordered to pay a fine of $5,000. The arrest was made by a team of representatives from the FBI, the New Jersey State Police, and Monmouth Internet.
Melissa could multiply in Microsoft Word 97 and Word 2000, as well as in Microsoft Excel 97, 2000, and 2003. In addition, the virus could mass-mail itself from Microsoft Outlook 97 and Outlook 98.
I Love You - May 2000
Origin : Manila, Philippines
Using a similar method as Melissa, the computer virus dubbed "I Love You" managed to infect millions of computers around the world overnight. Just like Melissa, this computer virus sent passwords and usernames stored on the attacked computers back to the developer of the virus. After authorities traced the virus, they found that a young Filipino student was behind the attack. The young man was released because the Philippines at the time had no law against hacking and spreading malware. This situation served as one of the premises for creating the European Union’s global Cybercrime Treaty.
The Code Red worm - July 2001
This 21st-century computer worm managed to penetrate tens of thousands of systems that ran Microsoft Windows NT and Windows 2000 server software. The damages caused by the Code Red worm were estimated at $2 billion. Code Red was developed to turn the power of all the computers it infected against the official website of the White House at a predetermined date. In collaboration with virus hunters and tech firms, the White House managed to decipher the worm’s code and stop the traffic as the malware started its attack.
The Code Red II worm -August 2001
Origin : Makati City, Philippines
Two weeks after Code Red, the Code Red II worm was released. Although similar in behavior to the Code Red worm, analysis showed it to be a new worm rather than a variant. It was designed to exploit a security hole in the indexing software included with Microsoft’s Internet Information Server (IIS) web server software. It pseudo-randomly chose targets on the same or different subnets as the infected machine according to a fixed probability distribution, favoring targets on its own subnet more often than not, and it used the filler pattern XXXXXXXX… in its exploit request instead of Code Red’s NNNNNNNN…
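That subnet-biased target selection can be sketched in a few lines. This is an illustrative Python reconstruction, not the worm's actual code; the probability weights (1/2 same /8, 3/8 same /16, 1/8 fully random) follow commonly cited analyses of Code Red II and should be treated as approximate.

```python
import random

def next_target(infected_ip: str) -> str:
    """Pick the next IPv4 address to probe, biased toward nearby subnets."""
    octets = infected_ip.split(".")
    roll = random.random()
    if roll < 0.5:            # 1/2 of the time: same class A (/8) network
        keep = 1
    elif roll < 0.875:        # 3/8 of the time: same class B (/16) network
        keep = 2
    else:                     # 1/8 of the time: anywhere on the Internet
        keep = 0
    new = octets[:keep] + [str(random.randrange(256)) for _ in range(4 - keep)]
    return ".".join(new)

# Most probes land "near" the infected machine, so infections cluster.
print(next_target("203.0.113.7"))
```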
Nimda - 2001
Origin : China
Shortly after the September 11 tragedy, this computer virus infected hundreds of thousands of computers worldwide. Nimda was considered one of the most complicated viruses of its time, having five different methods of infecting computer systems and being able to duplicate itself.
Downadup - 2009
The latest and most dangerous virus is the "Downadup" worm, also called "Conficker". The computer security company F-Secure stated that the virus had infected 3.5 million computers worldwide. Downadup was so “successful” in spreading across the Web because it used a flaw that Microsoft had patched in October to remotely compromise computers running unpatched versions of Microsoft’s operating system. But the greatest power of the worm is believed to be the ability of infected computers to download destructive code from a random drop point. F-Secure stated that the three most affected countries were China, Brazil, and Russia.
Major web companies like Google, Facebook, Yahoo!, Akamai, and Limelight Networks will offer their content over IPv6 for a 24-hour “test flight” on June 8, 2011.
According to Google’s official blog:
On World IPv6 Day, we’ll be taking the next big step. Together with major web companies such as Facebook and Yahoo!, we will enable IPv6 on our main websites for 24 hours. This is a crucial phase in the transition, because while IPv6 is widely deployed in many networks, it’s never been used at such a large scale before. We hope that by working together with a common focus, we can help the industry prepare for the new protocol, find and resolve any unexpected issues, and pave the way for global deployment.
Google also stated that Internet users don’t need to do anything special to prepare for World IPv6 Day. Their current measurements suggest that the vast majority (99.95%) of users will be unaffected.
What is the difference between IPv4 and IPv6?
Internet Protocol Version 4 (IPv4)
The Internet Protocol version 4, also known as IPv4, is the most widely and commonly used Internet Protocol version around the world.
IPv4 is a connectionless protocol for use on packet-switched Link Layer networks (e.g., Ethernet). It operates on a best effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol (e.g., Transmission Control Protocol).
IPv4 uses 32-bit (four-byte) addresses, which limits the address space to 4,294,967,296 (2^32) possible unique addresses. However, some are reserved for special purposes such as private networks (~18 million addresses) or multicast addresses (~270 million addresses). This reduces the number of addresses that can potentially be allocated for routing on the public Internet. As addresses are incrementally delegated to end users, an IPv4 address shortage has been developing. Redesigns of the network addressing architecture via classful network design, Classless Inter-Domain Routing (CIDR), and network address translation (NAT) have significantly delayed the inevitable exhaustion; but on February 3, 2011, IANA’s primary address pool was exhausted when the last 5 blocks were allocated to the 5 regional Internet registries (RIRs).
This limitation has stimulated the development of IPv6, which is currently in the early stages of deployment, and is the only long-term solution.
IPv4 addresses may simply be written in any notation expressing a 32-bit integer value, but for human convenience, they are most often written in dot-decimal notation, which consists of the four octets of the address expressed separately in decimal and separated by periods.
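For example, converting between the underlying 32-bit integer and dot-decimal notation takes only a few lines of Python (a sketch using the documentation address 192.0.2.1):

```python
def to_dotted(value: int) -> str:
    """Write a 32-bit integer as four decimal octets separated by dots."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def to_int(dotted: str) -> int:
    """Parse dot-decimal notation back into the underlying 32-bit integer."""
    result = 0
    for octet in dotted.split("."):
        result = (result << 8) | int(octet)
    return result

print(to_dotted(3221225985))  # 192.0.2.1
print(to_int("192.0.2.1"))    # 3221225985
print(2 ** 32)                # 4294967296 possible addresses
```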
Internet Protocol Version 6 (IPv6)
The Internet Protocol version 6, also known as IPv6, is a version of the Internet Protocol (IP) designed to succeed Internet Protocol version 4 (IPv4), and it is the only long-term solution to the IPv4 address shortage. It was developed by the Internet Engineering Task Force (IETF). Like IPv4, IPv6 is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks. It uses 128-bit addresses, so the new address space supports 2^128 (approximately 340 undecillion, or 3.4×10^38) addresses.
This expansion allows for many more devices and users on the internet as well as extra flexibility in allocating addresses and efficiency for routing traffic. It also eliminates the primary need for network address translation (NAT), which gained widespread deployment as an effort to alleviate IPv4 address exhaustion.
It also implements additional features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits, to facilitate an automatic mechanism for forming the host identifier from link-layer media addressing information (the MAC address). Network security is also integrated into the design of the IPv6 architecture, and the IPv6 specification mandates support for Internet Protocol Security (IPsec) as a fundamental interoperability requirement. It will also expand the capabilities of the Internet and enable a variety of valuable and exciting scenarios, including peer-to-peer and mobile apps.
IPv6 addresses have two logical parts: a 64-bit network prefix and a 64-bit host address part. (The host address is often automatically generated from the interface MAC address.) An IPv6 address is represented as 8 groups of 16-bit hexadecimal values separated by colons (:). A typical example of an IPv6 address is fe80:0000:0000:0000:0202:b3ff:fe1e:8329. The hexadecimal digits are case-insensitive.
The 128-bit IPv6 address can be abbreviated with the following rules:
- Rule one: Leading zeroes within a 16-bit value may be omitted. For example, the address fe80:0000:0000:0000:0202:b3ff:fe1e:8329 may be written as fe80:0:0:0:202:b3ff:fe1e:8329.
- Rule two: A single occurrence of consecutive groups of zeroes within an address may be replaced by a double colon. For example, fe80:0:0:0:202:b3ff:fe1e:8329 may be further shortened to fe80::202:b3ff:fe1e:8329.
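Both abbreviation rules are implemented by Python's standard `ipaddress` module, which can be used to check the shortened forms of an address:

```python
import ipaddress

addr = ipaddress.IPv6Address("fe80:0000:0000:0000:0202:b3ff:fe1e:8329")
print(addr.exploded)    # the full form, with every leading zero written out
print(addr.compressed)  # fe80::202:b3ff:fe1e:8329 -- both rules applied
```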
What is Internet Protocol?
The Internet Protocol is the principal communications protocol used for exchanging data packets across an internetwork using the Internet Protocol Suite.
It is responsible for routing packets across network boundaries, and it is the primary protocol that establishes the Internet.
IP is the primary protocol in the Internet Layer of the Internet Protocol Suite and has the task of delivering datagrams from the source host to the destination host based solely on their addresses, also known as IP addresses. For this purpose, IP defines addressing methods and structures for datagram (packet) encapsulation.
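To make "addressing methods and structures for datagram encapsulation" concrete, here is a minimal Python sketch that packs and unpacks the fixed 20-byte IPv4 header with the standard `struct` module. The header values (addresses, TTL, protocol number) are made-up examples, not taken from a real capture.

```python
import socket
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header into its named fields."""
    (version_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": version_ihl >> 4,          # high nibble: IP version
        "header_len": (version_ihl & 0x0F) * 4,  # low nibble: header words
        "ttl": ttl,
        "protocol": proto,                    # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built example header: version 4, IHL 5, TTL 64, protocol TCP (6),
# from 192.0.2.1 to 198.51.100.2 (checksum left as 0 for the demo).
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.0.2.1"),
                     socket.inet_aton("198.51.100.2"))
info = parse_ipv4_header(header)
print(info["src"], "->", info["dst"], "proto", info["protocol"])
```

This is exactly the source and destination addressing the text describes: every datagram carries both IP addresses in its header so that routers can forward it hop by hop.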
Historically, IP was the connectionless datagram service in the original Transmission Control Program introduced by Vint Cerf and Bob Kahn in 1974, the other being the connection-oriented Transmission Control Protocol (TCP). The Internet Protocol Suite is therefore often referred to as TCP/IP.
The first major version of IP, now referred to as Internet Protocol version 4 (IPv4), is the most widely used protocol of the Internet, although its successor, Internet Protocol version 6 (IPv6), is in active, growing deployment worldwide.
Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face and fame until the early 1990s.
On 6 August 1991, CERN (Conseil Européen pour la Recherche Nucléaire ), a pan-European organization for particle research, publicized the new World Wide Web project. The Web was invented by British scientist Sir Tim Berners-Lee in 1989.
Sir Tim Berners-Lee created a browser-editor with the goal of developing a tool to make the Web a creative space to share and edit information and build a common hypertext.
FIRST WEB BROWSER
The first graphical web browser to gain widespread popularity was Mosaic, developed by the National Center for Supercomputing Applications (NCSA) at the University of Illinois. (The very first browser was Berners-Lee’s own WorldWideWeb browser-editor, mentioned above.) A screenshot of the browser is shown below.
credits to : NCSA Image Archive / NCSA /University of Illinois.
FIRST WEB SERVER
The first web server in the world was the NeXT Computer used by Sir Tim Berners-Lee while developing and editing the first HTML documents.
credits to : en.wikipedia.org / User:Aavindraa / Under GNU Free Documentation License v1.2
Before the 1990s, when Sir Tim Berners-Lee invented the Hypertext Markup Language (HTML) and the World Wide Web (WWW), there was already an Internet…
The early Internet was used by computer experts, engineers, scientists, and librarians. There was nothing friendly about it. There were no home or office personal computers in those days, and anyone who used it, whether a computer professional or an engineer or scientist or librarian, had to learn to use a very complex system.
The Internet was the result of the visionary thinking of people in the 1960s who saw great potential value in allowing computers to share information on research and development in scientific and military fields.
Joseph Carl Robnett Licklider of the Massachusetts Institute of Technology (MIT) first proposed a global network of computers in 1962. He moved to the Defense Advanced Research Projects Agency (DARPA) in late 1962 to head the work to develop it.
Leonard Kleinrock of MIT, and later UCLA, developed the theory of packet switching, which is the basis of Internet connections.
Later, in 1965, Lawrence Roberts of MIT connected a Massachusetts computer with a California computer over dial-up telephone lines. It demonstrated the feasibility of wide area networking, but it also showed that the telephone line was inadequate for the job.
One year after the dial-up connection, Kleinrock’s packet switching theory was confirmed. Roberts moved over to DARPA and developed his plan for ARPANET.
The Internet, then known as ARPANET, was brought online in 1969 under a contract let by the renamed Advanced Research Projects Agency (ARPA).
E-mail was adapted for ARPANET by Ray Tomlinson of BBN in 1972. He picked the @ symbol from the available symbols on his teletype to link the username and address.
The telnet protocol, enabling logging on to a remote computer, was published as a Request for Comments (RFC) in 1972. RFCs are a means of sharing development work throughout the community, a role they still serve today.
The ftp (file transfer) protocol, enabling file transfers between Internet sites, was published as an RFC in 1973, and from then on RFC’s were available electronically to anyone who had use of the ftp protocol.
The Internet matured in the 70’s as a result of the TCP/IP architecture, first proposed by Bob Kahn and developed together with Vint Cerf.
The Unix to Unix Copy Protocol (UUCP) was invented in 1978 at Bell Labs. Usenet was started in 1979 based on UUCP.
Similarly, BITNET (Because It’s Time Network) connected IBM mainframes around the educational community and the world to provide mail services beginning in 1981.
In 1986, the National Science Foundation funded NSFNet as a cross country 56 Kbps backbone for the Internet.
As the commands for E-mail, Telnet, and FTP were standardized, it became a lot easier for non-technical people to learn to use the nets.
Well, it’s very hard to choose which monitor to buy.
But here are some tips that might help you decide.
CRT (Cathode Ray Tube) monitors are mostly used by graphic designers, studio centers, and medical equipment makers, and by companies that need their color advantages.
Some advantages of CRT monitors:
- Multisync Capable
- High Refresh Rates
- Color Clarity and Depth
- Affordable (about Php 1500 - 4000)
- Can be used at any resolution up to the maximum supported. No image quality is lost at any resolution.
- Wide viewing angle
Some disadvantages of CRT monitors:
- Very Heavy and Large or Very Bulky
- Use Large Amounts of Energy
- Generate Excess Heat
- Emit electromagnetic radiation
- Suffer from burn-in problems
LCD (Liquid Crystal Display) monitors are commonly used in offices, internet cafes, and study rooms because of their slimness and lightness. They are also preferred by the health-conscious because they emit less radiation than CRT monitors.
Some advantages of LCD monitors:
- Smaller and Lighter or Not too bulky
- Energy Efficient
- Cause Less Eye Fatigue
- Flicker Free Images
- Little or No Radiation
Some disadvantages of LCD monitors:
- Blurry Images Outside Native Resolution
- Motion Blur on Fast Moving Images
- Cheaper Models Have Reduced Color Clarity
- High-Quality Models Are Expensive (about Php 4,000 - 10,000)
- Not ideal for Standard Definition (SD) videos, but great for High Definition (HD) videos