Internet and Web Engineering
Internet and Web Engineering encompasses the technologies and methodologies that facilitate the creation, maintenance, and enhancement of the Internet and the World Wide Web. The Internet serves as a global network of interconnected devices, enabling communication and data sharing across vast distances. It fundamentally transformed how people interact, conduct business, and access information.
The World Wide Web, invented by Tim Berners-Lee in 1989, utilizes hyperlinks to allow users to navigate between content hosted on different websites, primarily formatted using Hypertext Markup Language (HTML). Over the years, numerous advancements in hardware and software have significantly improved Internet usability and speed.
Internet communication relies on various protocols, such as the Internet Protocol (IP) and Hypertext Transfer Protocol (HTTP), which govern how data is transmitted between devices. The Open Systems Interconnection (OSI) model is crucial in standardizing communication and interoperability among different systems.
In addition to foundational technologies, modern web applications have emerged, enabling dynamic interactions and user engagement. This field also presents numerous career opportunities, ranging from network administration to web design, reflecting the growing demand for skilled professionals in an increasingly digital world. As Internet usage expands globally, it continues to influence economic activities, social interactions, and access to information, although challenges like cybersecurity threats and the digital divide remain significant.
Summary
The term Internet is often used to describe the web of digitally connected computer networks that is accessible to the public around the world. The World Wide Web was developed by Oxford University graduate Tim Berners-Lee in 1989 while he was working at the European Organization for Nuclear Research (CERN). The World Wide Web allows users to browse through various documents on different websites by clicking on hyperlinks located on Web pages. Numerous hardware and software advances have made the Internet an indispensable tool for business transactions and personal communication.
Definition and Basic Principles
The Internet is a type of wide area network (WAN) because digital devices of many different types can connect to it from locations all over the world. Although the standards for sharing information across the Internet are open and free, connecting to the Internet itself typically is not free and requires the services of a private company that functions as an Internet service provider (ISP). Many of these ISPs were originally phone companies because, during the 1990s, most connections to the Internet were analog dial-up connections made over phone lines by a device called a modem. A modem translates the digital signal from a sending computer into the analog signals carried by phone lines and then back into the digital signal required by the receiving computer.
![Tim Berners-Lee, creator of the World Wide Web. By John S. and James L. Knight Foundation [CC-BY-SA-2.0 (creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons](https://imageserver.ebscohost.com/img/embimages/ers/sp/embedded/89250498-78460.jpg?ephost1=dGJyMNHX8kSepq84xNvgOLCmsE2epq5Srqa4SK6WxWXS)
New hardware devices and software have continuously been developed to make sharing information via the Internet easier and faster. Hypertext Markup Language (HTML) was created to structure and format Web page content, which is transmitted over the Internet using the Hypertext Transfer Protocol (HTTP). In the twenty-first century, countless Web applications are available to computer and mobile phone users worldwide.
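The relationship between the two can be sketched in a few lines of Python: HTTP is plain text that wraps an HTML body behind a status line and headers. The page content and header values below are invented for illustration.

```python
# Sketch: an HTTP response is plain text with a status line, headers,
# a blank line, and then the HTML body that the browser renders.
def build_response(html: str) -> str:
    """Assemble a minimal HTTP/1.1 response around an HTML body."""
    return (
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/html\r\n"
        f"Content-Length: {len(html.encode('utf-8'))}\r\n"
        "\r\n"
        + html
    )

def parse_response(raw: str) -> tuple[str, str]:
    """Split a response back into its status line and body."""
    head, _, body = raw.partition("\r\n\r\n")
    status_line = head.split("\r\n")[0]
    return status_line, body

page = "<html><body><h1>Hello</h1></body></html>"
raw = build_response(page)
status, body = parse_response(raw)
```

The blank line separating headers from body is what lets a browser know where the HTML begins.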
Background and History
The precursor to the Internet was created in 1969 by the Advanced Research Projects Agency (now the Defense Advanced Research Projects Agency) of the United States Department of Defense. This project proposed a method to link the computers at several universities (the University of California, Los Angeles and Santa Barbara; Stanford; and the University of Utah) to share computational data via networks. This network became known as ARPANET. In the 1980s, the International Organization for Standardization (ISO) implemented its Open Systems Interconnection Reference Model (OSI-RM) to facilitate the interoperability of different hardware components. The creation in the 1990s of new software, such as browsers, and new languages, such as HyperText Markup Language (HTML), made the explosive growth of Internet activity and websites possible.
Along with the increasing use of microcomputers, there was also an increase in the need for trained workers because companies needed to be able to share information. This need for sharing information was one of the driving forces in the development of the Internet and the client-server model common to Web architecture, which now allows workers to share files and access centralized databases.
How It Works
To transmit data across the Internet, there must be a level of communication possible between devices, analogous to two people shaking hands. For computers, this ability to communicate is called interoperability, and it can be classified as connection-oriented or connectionless. The connection-oriented mode of communication is somewhat similar to a phone call because it requires that the sender of a message wait for the intended recipient to answer before any data is sent. It is more secure, but slower, than the connectionless mode of data transmission. The connectionless mode has more of a broadcast nature, with data being transmitted before any connection is established, analogous to sending a letter or postcard through regular mail. Different protocols, developed according to standards published by the Institute of Electrical and Electronics Engineers (IEEE), are used for communication across the Internet, depending on whether the connection-oriented or connectionless mode is best for a given situation.
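The two modes can be sketched with Python's standard socket module, assuming a local loopback interface: a UDP datagram goes out with no handshake, like the postcard, while a TCP exchange requires connect and accept, like waiting for someone to answer the phone.

```python
import socket

# Connectionless mode (UDP): a datagram is sent with no prior handshake.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))            # let the OS pick a free port
udp_recv.settimeout(2)
udp_send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_send.sendto(b"postcard", udp_recv.getsockname())
datagram, _ = udp_recv.recvfrom(1024)

# Connection-oriented mode (TCP): a handshake (connect/accept) happens
# before any data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())       # handshake completes here
conn, _ = server.accept()
conn.settimeout(2)
client.sendall(b"phone call")
stream = conn.recv(1024)

for s in (udp_send, udp_recv, client, conn, server):
    s.close()
```

On a real network, the UDP datagram could be lost without notice, while TCP would detect the loss and retransmit; that trade-off is the security-versus-speed distinction described above.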
Logical and Physical Topology Classifications. The different protocols function via alternative topologies that can be classified as logical or physical. Logical topologies describe the theoretical, but not visible, pathways for the transmission of data signals throughout the physical topologies, which describe the tangible hardware connections of the actual pieces of equipment on a network. The two logical topologies are bus and ring, and the five physical topologies are bus, star, ring, mesh, and cellular.
The Seven Layers of the OSI Web Architecture Model. The OSI model was developed as the Web architectural model for designing components that allow communication across the Internet. It continues to be useful as a theoretical construct that lets manufacturers develop different pieces of software and hardware according to widely accepted standards. If a single piece of hardware within a home or business network fails, a replacement, even one from a different manufacturer, can successfully take its place. The ISO developed this network architecture model in 1977, primarily to standardize equipment from different manufacturers and vendors so that data communication could flow uninterrupted across different nations with complete interoperability of software and hardware components. The OSI model also helps diagnose connection problems by dividing the communication between two networked computers into seven layers.
The lowest level is called the physical layer because it comprises the most fundamental hardware for connecting computers and transmitting bits, including coaxial, twisted-pair, and fiber-optic cabling and connectors. Next is the data link layer, which manages timing, flow control, and error control. It also recognizes physical device addresses, which consist of 48 bits assigned by manufacturers. Most bridges and switches forward data across different networks at this layer according to protocols such as Ethernet. The network layer provides a data route. It uses the Internet Protocol (IP), which defines five classes of addresses for devices based on a 32-bit logical (not physical) address. These addresses are typically assigned by an organization's network administrator. Routers combine hardware and software that can choose among routes for data transmission across different networks. The transport layer allows end-to-end network communication. It uses the Transmission Control Protocol (TCP), a reliable, connection-oriented protocol that can transmit data between two points in both directions at the same time. The session layer coordinates individual connection sessions; applications at this level may use Structured Query Language (SQL) to retrieve information from databases. The presentation layer handles image, sound, and video formats such as the Graphics Interchange Format (GIF), Musical Instrument Digital Interface (MIDI), and Moving Picture Experts Group (MPEG) formats. It also encrypts and compresses data for secure transmission. The top layer, the application layer, is responsible for the data that deals directly with computer users. It contains the protocols for email, Web access, and file transfer, such as HTTP and the File Transfer Protocol (FTP). These layers work together through encapsulation to make the transmission of data from one device to another seamless.
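Encapsulation and the network layer's 32-bit addresses can be illustrated with a toy sketch in Python. The bracketed layer "headers" are invented stand-ins, not real protocol headers; the layer names follow the OSI model described above.

```python
import socket
import struct

# Toy encapsulation: each layer wraps the data from the layer above in
# its own header; the receiver unwraps them in reverse order.
LAYERS = ["application", "presentation", "session",
          "transport", "network", "data link", "physical"]

def encapsulate(payload: str) -> str:
    for layer in LAYERS:               # application wrapped first,
        payload = f"[{layer}]{payload}"  # physical ends up outermost
    return payload

def decapsulate(frame: str) -> str:
    for layer in reversed(LAYERS):     # strip outermost header first
        prefix = f"[{layer}]"
        assert frame.startswith(prefix)
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("hello")
data = decapsulate(frame)

# The network layer's 32-bit logical address: a dotted-quad IPv4 address
# is just four bytes, i.e. one 32-bit number.
packed = socket.inet_aton("192.168.1.1")
(as_int,) = struct.unpack("!I", packed)
```

The sketch shows why the layering is transparent to applications: whatever the lower layers wrap around the data, the receiving side removes in mirror order.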
Applications and Products
In 1990, the World Wide Web began as a public network of linked digital content, described using HTML, at locations identified by a uniform resource locator (URL). Initially, the digital content was static and slow to display. To display forms that could interact with the user, Brendan Eich of Netscape developed the language initially called LiveScript in 1995, which later became widely known as JavaScript. WebAssembly, introduced in 2017, is a binary code format that allows programs written in languages such as C, C++, and Rust to run in web browsers and is designed to complement JavaScript.
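How a URL names a digital location can be sketched with Python's standard library parser; the example URL is invented.

```python
from urllib.parse import urlparse

# A URL packs several pieces of addressing into one string.
url = "http://www.example.com:8080/docs/page.html?q=osi#layers"
parts = urlparse(url)
# scheme   -> which protocol the browser should use (here HTTP)
# hostname -> which server holds the content
# port     -> which service on that server (8080 here; 80 is the default)
# path     -> which document on the server
# query    -> extra parameters sent with the request
# fragment -> which part of the page to display
```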
Web Applications. HTML was useful for displaying static Web pages but not dynamic ones. Technologies were developed to provide dynamic Web page access, including ASP, Perl/CGI, JavaScript, and ColdFusion. ColdFusion and ASP are examples of server-side technologies, which means that the computer of an individual user (client) requires no special software or hardware beyond a Web browser to access the information from a remote geographical location. JavaScript, on the other hand, is a client-side technology.
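The server-side idea can be sketched in Python: the server assembles the HTML for each request, so the client needs nothing beyond a browser. The page template and greeting logic here are invented for illustration.

```python
import html

def render_page(params: dict) -> str:
    """Generate a page on the server from per-request parameters."""
    # Escape user-supplied input before embedding it in HTML.
    name = html.escape(params.get("name", "visitor"))
    return f"<html><body><p>Hello, {name}!</p></body></html>"

# Each request can yield a different page from the same code.
page = render_page({"name": "Ada"})
```

Because the code runs on the server, the client only ever sees the finished HTML, which is also why server-side technologies are considered more secure.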
In 2020, many JavaScript developers shifted to the Angular framework because of features such as two-way data binding, dependency injection, and TypeScript support. However, because Angular is a large framework, it is often considered best suited to large-scale, big-budget applications.
Adobe ColdFusion 2023 is one example of a Web application server. Other examples among many introduced in the early twenty-first century are ASP.NET, the Apache HTTP Server, and Internet Information Services (IIS). These server-side technologies are more secure because individual clients do not have direct access to the code. Java libraries can be imported into Web pages by using ColdFusion. ColdFusion's compiler translates ColdFusion code directly into Java bytecode, which speeds up Web page interactions.
Linus Torvalds created Linux in 1991 while he was a student at the University of Helsinki. Linux has continued to grow in popularity as an operating system ideal for Internet applications because it is free, open-source (without any hidden or proprietary interfaces), and compatible with many devices and Web applications, including Oracle and IBM databases and many browsers. Moreover, it is powerful enough to handle many of the applications that were initially handled by the UNIX operating system. Devices and applications that use Linux include digital cameras, sound cards, printers, modems, smartphones, and tablets. Android, one of the most popular mobile operating systems in the early twenty-first century (with around 70 percent of the market share in 2021), is Linux-based, as is most cloud computing infrastructure.
To decrease the time for Web pages to be displayed, frames were introduced in the mid-1990s to divide a document into pieces and display them gradually. Netscape's Navigator 2.0 was the first browser to use both frames and JavaScript. Microsoft's Internet Explorer was the next browser to implement frames, and the creation of Dynamic HTML (DHTML), along with Cascading Style Sheets (CSS) and the Document Object Model (DOM), in the late 1990s revolutionized Web pages by allowing parts of pages to be modified and by giving pages structure. Hidden frames, Extensible Markup Language (XML), and ActiveX controls such as XMLHttp became widespread after 2001. A simpler data format, JavaScript Object Notation (JSON), is also available.
These technologies were taken to the next level in 2005 by Jesse James Garrett, co-founder of Adaptive Path, to make Web pages more dynamic. He called this new technology Ajax, an acronym for asynchronous JavaScript and XML. An XML parser is software that reads and interprets XML data. Google was one of the first companies to incorporate this technology, using it in the Google Maps, Google Suggest, and Gmail Web applications, which are still in use. Many additional companies have adopted Ajax, including Amazon, Microsoft, and Yahoo. Ajax increases interactivity for users over the traditional “click and wait” process: JavaScript embedded within a Web page sends a message to a Web server that allows data to be retrieved and displayed without waiting for the entire Web page to reload.
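The data formats involved can be sketched in Python: the same record parsed from XML, as an Ajax response might deliver it, and from JSON, the simpler alternative mentioned above. The field names are invented for illustration.

```python
import json
import xml.etree.ElementTree as ET

# An XML parser reads markup and exposes it as a navigable tree.
xml_data = "<user><name>Ada</name><role>engineer</role></user>"
root = ET.fromstring(xml_data)
xml_name = root.find("name").text

# JSON carries the same record with less ceremony, which is one reason
# it largely superseded XML for Ajax-style data exchange.
json_data = '{"name": "Ada", "role": "engineer"}'
record = json.loads(json_data)
json_name = record["name"]
```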
Flash was a multimedia platform introduced by Macromedia to create dynamic images on a Web page using vector graphics, which are smaller than the bitmap images common in digital cameras and therefore do not slow down a page's animations and graphics. It was replaced by HTML5, and major browsers stopped supporting it after 2020. The platform produced the stand-alone Flash Player application and Shockwave Flash files used to display movies on the Internet. It used a JavaScript application programming interface and CSS. CSS is used by HTML and Flash as a template to maintain a consistent appearance among Web pages located on different websites.
Common Gateway Interface. The Common Gateway Interface (CGI) is used to write programs that interact with Web pages to process information a person types into a form for registration, a credit card purchase, or a search on a website. CGI functions as an interface between a Web server and external code, often written in Perl (Practical Extraction and Report Language). These CGI applications are referred to as scripts rather than programs. Because the general use of CGI is to process dynamic data submitted through forms and to assist with searches, CGI scripts typically generate an HTML response and return it to the browser over HTTP; they can also emit caching information so that a user can quickly return to the website. Modern alternatives and variations include FastCGI, PHP, Java Servlets, and Web frameworks such as Ruby on Rails and Django.
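The core of a CGI script's form handling, decoding the URL-encoded query string a browser submits, can be sketched with Python's standard library; the form fields below are invented.

```python
from urllib.parse import parse_qs

# A browser submits form data as a URL-encoded query string,
# with '+' standing for a space.
query_string = "name=Ada+Lovelace&item=book&qty=2"
form = parse_qs(query_string)

name = form["name"][0]   # parse_qs returns a list of values per field
qty = int(form["qty"][0])
```

A real CGI script would read this string from the environment or standard input, then emit an HTML response; the decoding step shown here is the same either way.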
Active Server Pages, ColdFusion, and PHP. The primary problem with CGI is that it requires a new, separate execution of the CGI script each time updated input is processed. Therefore, several alternatives have been developed. One is the Active Server Pages (ASP) engine, which runs as part of the Web server itself, so a separate process does not need to be started for each request. In addition, ASP can be used with several programming languages, including JavaScript and Visual Basic, making it more versatile than CGI. ColdFusion allows functions to be programmed and then called from within a Web page, allowing more flexibility. Both ColdFusion and PHP are integrated into Web servers. PHP was developed in 1994; the acronym originally stood for "personal home page" because PHP began as a scripting language to facilitate the development of Web pages. PHP code can be embedded within the HTML of a Web page to make it more dynamic because the code is interpreted rather than compiled. In November 2022, the PHP development team released PHP 7.4.33 with security fixes.
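The embedded-code idea behind PHP and ASP can be sketched as a toy template substitution in Python; the `<?= ... ?>` delimiter mimics PHP's short echo tag, and the substitution logic here is invented and far simpler than a real interpreter.

```python
import re

def render(template: str, context: dict) -> str:
    """Replace each <?= name ?> placeholder with its value from context."""
    return re.sub(
        r"<\?=\s*(\w+)\s*\?>",
        lambda m: str(context[m.group(1)]),
        template,
    )

# Static HTML with one embedded placeholder, filled in per request.
page = render("<p>Hello, <?= user ?>!</p>", {"user": "Ada"})
```

The point of the design is that the page stays mostly ordinary HTML, with small islands of code evaluated on the server before the page is sent.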
Careers and Course Work
This career field includes network administrators, network engineers, systems engineers, database administrators, help desk engineers and technicians, Web designers, security analysts, programmers, and project managers. There are ample opportunities for employment as help desk engineers and technicians, Web designers, database administrators, and network administrators for individuals with an associate's degree in an appropriate field or with work experience and certifications. Individuals can also work as research analysts, network experts, and Internet sales consultants.
In addition to vendor-specific certifications offered by Cisco, RIM, Oracle, Sun Microsystems, and Microsoft, several certifications are vendor-neutral, such as CompTIA's Network+, Security+, and i-Net+. Network administrators design networks for organizations and maintain their security daily, install shared software, assign logical Internet Protocol (IP) addresses, and back up data. Web designers use Web page languages such as HTML, Perl, Java, and VB.NET, and certifications in these languages indicate valuable skills as well. A bachelor's degree in computer science or a computer-related field is beneficial for securing employment as a security analyst or project manager.
Social Context and Future Prospects
The economy can be described as digital because the Internet facilitates all kinds of online financial transactions involving the everyday use of bank accounts, credit cards, airline tickets, hotel reservations, government benefits, and the stock market. These transactions can be completed over both wired and wireless networks. The rapid development of Internet applications that facilitate communication and financial transactions has been accompanied by a growth in identity theft, ransomware, and other cyber crimes. Web architecture has been developed both to provide convenience for consumer transactions and social networking sites, such as Meta (formerly Facebook), Instagram, and X (formerly Twitter), and to provide security through firewalls, intrusion detection systems, and security software updates that prevent cyber crimes.
In the ever-evolving digital landscape, cybercriminals are constantly upgrading their technology. In response, law enforcement agencies, policymakers, businesses, and private financial institutions are in a perpetual race to update, research, and invest in countermeasures. Multifactor authentication, virtual private networks, encryption technology, machine learning algorithms, and artificial intelligence are some of the tools commonly used in the twenty-first century to detect, predict, and prevent fraud, data breaches, and other cyber crimes. This creates a variety of careers in high demand.
Despite security risks, Internet usage for personal, financial, educational, and business-related activities continues evolving in industrialized and developing nations. Between 2013 and 2023, the number of Internet users grew from 2.53 billion to 5.16 billion as the cost of devices decreased and infrastructure improvements brought connectivity to many rural regions. However, much of West Africa lagged behind the rest of the world. A lack of Internet access creates a digital divide, limiting educational opportunities, connectivity speeds, career opportunities, and overall quality of life.
Artificial intelligence, automation, and virtual reality continue to reshape the healthcare industry, giving increased and improved access and patient experiences. These technologies also provide businesses immersive training, distance collaboration, and customer engagement opportunities. For individuals, the internet's evolution continues to improve safety and accessibility, increase career and professional development opportunities, and drive global connectivity.
Bibliography
Carey, Patrick. HTML5 and CSS: Comprehensive. 8th ed., Cengage, 2021.
"Computer and Information Technology Occupations." U.S. Bureau of Labor Statistics, 17 Apr. 2024, www.bls.gov/ooh/computer-and-information-technology/home.htm. Accessed 30 May 2024.
Elahi, Ata, and Alex Cushman. Computer Networks Data Communications, Internet and Security. Springer International Publishing, 2024.
Godinho, António, et al. “Computer Science and Information Technology.” Smart Objects and Technologies for Social Good, 2024, pp. 17–29. doi.org/10.1007/978-3-031-52524-7_2.
"Key Internet Statistics to Know in 2024 (Including Mobile)." Broadband Search, 18 Apr. 2024, www.broadbandsearch.net/blog/internet-statistics. Accessed 30 May 2024.
Kundu, Kishalaya. "What Is HTML5 and Why Has It Replaced Flash and Silverlight?" Beebom Media, 2020, beebom.com/what-is-html5. Accessed 4 Jul. 2021.
Love, Chris. ASP.NET 3.5 Website Programming: Problem-Design-Solution. Wiley, 2010.
Nath, Keshab. "Evolution of the Internet from Web 1.0 to Metaverse: The Good, the Bad and the Ugly." TechRxiv Powered by IEEE, 2022, pp. 1-12. doi.org/10.36227/techrxiv.19743676.v1.
Underdahl, Brian. Macromedia Flash MX: The Complete Reference. McGraw, 2003.