Category Archives: System Networking

What does Cloud Computing really mean?

Cloud computing is a general term for anything that involves delivering hosted services over the Internet. These services are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). The name cloud computing was inspired by the cloud symbol that’s often used to represent the Internet in flowcharts and diagrams.

Cloud computing comes into focus only when you think about what IT always needs: a way to increase capacity or add capabilities on the fly without investing in new infrastructure, training new personnel, or licensing new software. Cloud computing encompasses any subscription-based or pay-per-use service that, in real time over the Internet, extends IT’s existing capabilities.

There’s a good chance you’ve already used some form of cloud computing. If you have an e-mail account with a Web-based e-mail service like Hotmail, Yahoo! Mail or Gmail, then you’ve had some experience with cloud computing. Instead of running an e-mail program on your computer, you log in to a Web e-mail account remotely. The software and storage for your account doesn’t exist on your computer — it’s on the service’s computer cloud.


Cloud Computing Architecture
What makes up a cloud computing system?

When talking about a cloud computing system, it’s helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. The front end is the side the computer user, or client, sees. The back end is the “cloud” section of the system.

The front end includes the client’s computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.

On the back end of the system are the various computers, servers and data storage systems that create the “cloud” of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server. A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. Most of the time, servers don’t run at full capacity.

If a cloud computing company has a lot of clients, there's likely to be high demand for storage space; some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require to keep all its clients' information stored. That's because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients' information and store it on other devices. The copies enable the central server to access backup machines to retrieve data that would otherwise be unreachable. Keeping copies of data as a backup is called redundancy.
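The redundancy idea above can be sketched in a few lines of Python. This is only an illustration of the concept, not any real provider's storage system; the class and key names are made up, and plain dicts stand in for storage devices.

```python
# Illustrative sketch of storage redundancy: every write goes to a
# primary device and a replica, so a single device failure does not
# lose client data. Dicts stand in for storage hardware.

class RedundantStore:
    def __init__(self, devices):
        self.devices = devices

    def write(self, key, value):
        # Copy the data onto every device (full replication)
        for device in self.devices:
            device[key] = value

    def read(self, key):
        # If one device has "failed" (lost the key), fall back to a replica
        for device in self.devices:
            if key in device:
                return device[key]
        raise KeyError(key)

primary, replica = {}, {}
store = RedundantStore([primary, replica])
store.write("client42/report.txt", b"quarterly figures")
primary.clear()                            # simulate a device failure
print(store.read("client42/report.txt"))   # the replica still serves the data
```

Real cloud storage uses far more sophisticated replication and erasure coding, but the principle is the same: no piece of data lives on only one device.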

Service Models

Cloud computing providers offer their services according to three fundamental models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). IaaS is the most basic; each higher model abstracts away the details of the models below it.


Infrastructure as a Service (IaaS)
In this most basic cloud service model, cloud providers offer computers (as physical machines or, more often, as virtual machines), raw (block) storage, firewalls, load balancers, and networks. IaaS providers supply these resources on demand from large pools installed in data centers. Local area networks, including IP addresses, are part of the offer. For wide-area connectivity, the Internet can be used, or (in carrier clouds) dedicated virtual private networks can be configured.

To deploy their applications, cloud users then install operating system images on the machines as well as their application software. In this model, it is the cloud user who is responsible for patching and maintaining the operating systems and application software. Cloud providers typically bill IaaS services on a utility computing basis, that is, cost will reflect the amount of resources allocated and consumed.

Platform as a Service (PaaS)
In the PaaS model, cloud providers deliver a computing platform and/or solution stack typically including operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers. With some PaaS offers, the underlying compute and storage resources scale automatically to match application demand such that the cloud user does not have to allocate resources manually.

Software as a Service (SaaS)
In this model, cloud providers install and operate application software in the cloud, and cloud users access the software from cloud clients. The cloud users do not manage the cloud infrastructure and platform on which the application is running. This eliminates the need to install and run the application on the cloud user's own computers, simplifying maintenance and support. What makes a cloud application different from other applications is its elasticity. This can be achieved by cloning tasks onto multiple virtual machines at run time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access point. To accommodate a large number of cloud users, cloud applications can be multitenant, that is, any machine serves more than one cloud user organization. It is common to refer to special types of cloud-based application software with a similar naming convention: Desktop as a Service, Business Process as a Service, Test Environment as a Service, Communication as a Service.
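The elasticity described above (cloning tasks onto more virtual machines and spreading work over them behind one access point) can be sketched with a minimal round-robin load balancer. This is a toy model, not any real cloud provider's implementation; the VM names are hypothetical.

```python
from itertools import cycle

# Minimal sketch of SaaS elasticity: requests arrive at one access
# point and are dispatched round-robin over a pool of (pretend)
# virtual machines that can grow at run time.

class LoadBalancer:
    def __init__(self, vms):
        self.vms = list(vms)
        self._next = cycle(range(len(self.vms)))

    def add_vm(self, vm):
        # "Cloning tasks onto multiple virtual machines at run time":
        # scaling out is invisible to the client, who still sees one endpoint
        self.vms.append(vm)
        self._next = cycle(range(len(self.vms)))

    def dispatch(self, request):
        vm = self.vms[next(self._next)]
        return f"{vm} handled {request}"

lb = LoadBalancer(["vm-1", "vm-2"])
print(lb.dispatch("req-a"))  # vm-1 handled req-a
print(lb.dispatch("req-b"))  # vm-2 handled req-b
lb.add_vm("vm-3")            # scale out under load
```

Production load balancers also weigh server health and load, but round-robin shows the core idea: the client never needs to know how many machines sit behind the access point.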

Cloud storage advantages

  • Companies need only pay for the storage they actually use; virtual-storage features such as thin provisioning make this possible.
  • Companies do not need to install physical storage devices in their own datacenter or offices, although the storage still has to be hosted somewhere (hosting costs may be lower in offshore locations).
  • Storage maintenance tasks such as backup, data replication, and purchasing additional storage devices become the responsibility of the service provider, allowing organizations to focus on their core business (although someone still has to pay for the administrative effort behind these tasks).
  • Cloud storage provides users with immediate access to a broad range of resources and applications hosted in the infrastructure of another organization via a web service interface.

Six Benefits of Cloud Computing:

  • Reduced Cost

Cloud technology is paid incrementally, saving organizations money.

  • Increased Storage

Organizations can store more data than on private computer systems.

  • Highly Automated

No longer do IT personnel need to worry about keeping software up to date.

  • Flexibility

Cloud computing offers much more flexibility than past computing methods.

  • More Mobility

Employees can access information wherever they are, rather than having to remain at their desks.

  • Allows IT to Shift Focus

No longer having to worry about constant server updates and other computing issues, government organizations will be free to concentrate on innovation.

 

 

source: wikipedia


ARP Cache

Address Resolution Protocol (ARP) is a telecommunications protocol used for resolution of network layer addresses into link layer addresses, a critical function in multiple-access networks.
ARP has been implemented in many combinations of network and overlaying internetwork technologies, such as IPv4, Chaosnet, DECnet and Xerox PARC Universal Packet (PUP) using IEEE 802 standards, FDDI, X.25, Frame Relay and Asynchronous Transfer Mode (ATM); IPv4 over IEEE 802.3 and IEEE 802.11 are the most common cases. In Internet Protocol Version 6 (IPv6) networks, the functionality of ARP is provided by the Neighbor Discovery Protocol (NDP).

ARP (Address Resolution Protocol) is used to translate an IP address into a MAC address. There are two types of ARP messages: the ARP request, which is broadcast to all the systems in a LAN segment, and the ARP reply, which is unicast to the requesting station alone. ARP messages contain source and destination IP addresses and MAC addresses (if available), among other information.

The Address Resolution Protocol is used within a single LAN segment and cannot be routed across a different network. A gratuitous ARP message is broadcast to all the systems of a LAN segment when a system is just starting up or when the IP address or MAC address of a system has changed. This enables the computers in a LAN to update their ARP cache tables appropriately. This message does not solicit a response. In IPv6, a protocol called the Neighbor Discovery Protocol (NDP) performs the same function as ARP does in IPv4.

The process when one computer (C1) wants to communicate with another computer (C2) in a LAN segment:
Layer 2 communication between networked systems does not use IP addresses. Within a LAN segment, computers identify and communicate with each other using MAC addresses. So, when computer C1 has the target IP address of the computer C2 it wants to communicate with:

  •     It first looks in its own ARP cache (a table that maps the IP addresses of systems within the network to their corresponding MAC addresses) to see whether it already has the MAC address of C2.
  •     If the MAC address of C2 is present in its ARP cache table, C1 can append it to the message and send the message over the network (cable, switch).
  •     If the MAC address of C2 is not present in its ARP cache table, C1 broadcasts an ARP request message to all the systems in the network, asking for the MAC address that corresponds to the IP address in its possession.
  •     This ARP request is received by all the systems in the network, but only the computer with the target IP address (C2) responds to C1 with an ARP reply message indicating its MAC address.
  •     Now that C1 has both the IP address and the MAC address of C2, it can communicate with C2 using this information. In the process, both C1 and C2 update their ARP cache tables with the newly acquired information, so that the ARP broadcast can be avoided next time.
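The steps above can be sketched as a small simulation, with the LAN modeled as a dict of hosts and the ARP cache as a plain dict. All addresses here are made up for illustration; this is a model of the protocol's logic, not a real packet implementation.

```python
# Sketch of the C1 -> C2 ARP resolution flow described above.
# Hypothetical hosts on the LAN segment:
lan_hosts = {"192.168.1.10": "00:1A:2B:3C:4D:10",   # C1
             "192.168.1.20": "00:1A:2B:3C:4D:20"}   # C2

def resolve(arp_cache, target_ip):
    # Step 1: consult the local ARP cache first
    if target_ip in arp_cache:
        return arp_cache[target_ip]
    # Steps 3-4: "broadcast" an ARP request; only the owner of the
    # target IP answers with its MAC address
    mac = lan_hosts.get(target_ip)
    if mac is None:
        raise LookupError(f"no host answered for {target_ip}")
    # Step 5: cache the reply so the next lookup avoids a broadcast
    arp_cache[target_ip] = mac
    return mac

cache = {}
print(resolve(cache, "192.168.1.20"))  # broadcast, then cached
print(resolve(cache, "192.168.1.20"))  # second call served from the cache
```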


ARP Cache/Cache Table
Since computers cannot send broadcast messages every time they need to reach another network device, they store the IP addresses and corresponding MAC addresses of systems they frequently communicate with in a table called the ARP cache table. All the systems in the LAN maintain this table. The entries in the ARP cache table are generally short-lived, typically expiring after 15-20 minutes.

Since a LAN segment consists of a number of computing devices, individual ARP table entries are removed if the system doesn't communicate with certain devices for a considerable amount of time. This is done mainly to limit the size of the ARP cache.
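The short-lived-entry behavior described above amounts to a cache with a time-to-live (TTL). Here is a minimal sketch of that idea; the TTL is shortened from the 15-20 minutes mentioned above so the demo runs quickly, and the addresses are invented.

```python
import time

# Sketch of an ARP cache whose entries expire after a TTL, forcing a
# fresh ARP request for stale mappings.

class ArpCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}          # ip -> (mac, time stored)

    def put(self, ip, mac):
        self.entries[ip] = (mac, time.monotonic())

    def get(self, ip):
        record = self.entries.get(ip)
        if record is None:
            return None
        mac, stored = record
        if time.monotonic() - stored > self.ttl:
            del self.entries[ip]   # entry aged out; a new broadcast is needed
            return None
        return mac

cache = ArpCache(ttl_seconds=0.1)
cache.put("192.168.1.20", "00:1A:2B:3C:4D:20")
print(cache.get("192.168.1.20"))   # fresh entry is returned
time.sleep(0.2)
print(cache.get("192.168.1.20"))   # None: the entry expired and was removed
```

Real operating systems refine this with separate "reachable" and "stale" states (see `ip neigh` on Linux), but the expiry principle is the same.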

What's the difference between a MAC address and an IP address?

MAC and IP addresses are both key components of networking, but they serve different purposes and are visible in very different ways.

A MAC (Media Access Control) address is best thought of as a unique serial number assigned to every network interface on every device. And by unique, I do mean unique; no two network cards anywhere should have the same MAC address.

MAC addresses are 6 bytes (48 bits) long and are written in MM:MM:MM:SS:SS:SS format. The first 3 bytes are the manufacturer ID, assigned to each vendor by the IEEE. The last 3 bytes are a serial number assigned by the manufacturer.
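The manufacturer-ID/serial split described above can be seen by slicing any MAC address string. A short sketch (the sample address is the one from the Windows example below):

```python
# Split a 48-bit MAC address into its 3-byte manufacturer ID (OUI)
# and its 3-byte vendor-assigned serial number.

def split_mac(mac):
    octets = mac.replace("-", ":").split(":")   # accept both separators
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError(f"not a 48-bit MAC address: {mac}")
    oui = ":".join(octets[:3])      # manufacturer ID, assigned by the IEEE
    serial = ":".join(octets[3:])   # serial number, assigned by the vendor
    return oui, serial

print(split_mac("00-22-FA-5A-B4-C2"))  # ('00:22:FA', '5A:B4:C2')
```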

Operating systems provide various command-line and GUI utilities for finding a system's MAC address. Unix variants, including Solaris and Linux, support the “ifconfig -a”, “ip link list”, or “ip address show” commands, which display the MAC address of the network device among other useful information. On Windows (including NT, 2000, XP, and 2003), the “ipconfig /all” command at the command prompt displays the MAC address. On macOS, one can find the MAC address by opening “System Preferences” and selecting “Network”.

For example, the output at the Windows command prompt:
Ethernet adapter Local Area Connection 2:
.
.
Physical Address. . . . . . . . . : 00-22-FA-5A-B4-C2

An IP address is assigned to every device on a network so that the device can be located on that network. The internet is just a network, after all, and every device connected to it has an IP address so that it can be located. For example, a site's server might have the IP address 172.16.254.1. That number is used by the network routing equipment so that when you ask for a page from the site, the request is routed to the right server.

The computers or equipment you have connected to the internet are also assigned IP addresses. If you’re directly connected, your computer will have an IP address that can be reached from anywhere on the internet. If you’re behind a router, that router will have that internet-visible IP address, but it will then set up a private network that your computer is connected to, assigning IP addresses out of a private range that is not directly visible on the internet. All internet traffic must go through the router, and will appear on the internet to have come from that router.
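The public/private split described above is easy to check with Python's standard-library `ipaddress` module: addresses in the reserved private ranges (such as 192.168.0.0/16 or 10.0.0.0/8) are the ones a router hands out behind NAT and are not directly reachable from the internet. The sample addresses below are just illustrations (8.8.8.8 is a well-known public address).

```python
import ipaddress

# Classify an address as internet-visible or private (behind a router).

def visibility(address):
    ip = ipaddress.ip_address(address)
    return "private (behind a router/NAT)" if ip.is_private else "public"

print(visibility("192.168.1.7"))  # private (behind a router/NAT)
print(visibility("10.0.0.5"))     # private (behind a router/NAT)
print(visibility("8.8.8.8"))      # public
```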

The following metaphor illustrates the difference between an IP address and a MAC address:
An IP address is kind of like your postal address. Anyone who knows your postal address can send you a letter. That letter may travel a simple or complex route to get to you, but you don't care, as long as it makes it.
The same is true of packets of data traveling on a network like the internet. The IP address indicates where a packet is destined, and the system takes care of getting it there. A letter may or may not have a return address so you know whom to write back to; an IP packet always carries a return (source) IP address.

A MAC Address is kind of like the color, size and shape of your physical mail box. It’s enough that the postal carrier (your network router) can identify it, but it’s unique to you, there’s no reason that anyone other than your postal carrier might care what it is, and you can change it by getting a new mailbox (network card) at any time and slapping your name (IP address) on it without affecting your delivery.

In summary, the differences between MAC and IP addresses:
1. A MAC address is meant to be unique to each network interface card, while an IP address is commonly reassigned or replaced
2. An IP address reveals which host is on which network, while the same cannot be deduced from a MAC address
3. MAC address filtering is one of the security methods used in Wi-Fi
4. Both IP and MAC addresses can still be spoofed or copied

source: ask-leo! by Notenboom

IEEE 802.11 for WLAN

IEEE 802.11 is a set of standards for implementing wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands. They are created and maintained by the IEEE LAN/MAN Standards Committee (IEEE 802). The base version of the standard, IEEE 802.11-2007, has had subsequent amendments. These standards provide the basis for wireless network products using the Wi-Fi brand. The 802.11 family consists of a series of over-the-air modulation techniques that use the same basic protocol. The most popular are those defined by the 802.11b and 802.11g protocols, which are amendments to the original standard.
802.11 and 802.11x refer to a family of specifications developed by the IEEE for wireless LAN (WLAN) technology. 802.11 specifies an over-the-air interface between a wireless client and a base station, or between two wireless clients. 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use. Vic Hayes, who chaired IEEE 802.11 for 10 years and has been called the “father of Wi-Fi”, was involved in designing the initial 802.11b and 802.11a standards within the IEEE.

There are several specifications in the 802.11 family:

  • 802.11 — applies to wireless LANs and provides 1 or 2 Mbps transmission in the 2.4 GHz band using either frequency hopping spread spectrum (FHSS) or direct sequence spread spectrum (DSSS).
  • 802.11a — an extension to 802.11 that applies to wireless LANs and provides up to 54-Mbps in the 5GHz band. 802.11a uses an orthogonal frequency division multiplexing encoding scheme rather than FHSS or DSSS.
  • 802.11b (also referred to as 802.11 High Rate or Wi-Fi) — an extension to 802.11 that applies to wireless LANS and provides 11 Mbps transmission (with a fallback to 5.5, 2 and 1-Mbps) in the 2.4 GHz band. 802.11b uses only DSSS. 802.11b was a 1999 ratification to the original 802.11 standard, allowing wireless functionality comparable to Ethernet.
  • 802.11e — a wireless draft standard that defines the Quality of Service (QoS) support for LANs, and is an enhancement to the 802.11a and 802.11b wireless LAN (WLAN) specifications. 802.11e adds QoS features and multimedia support to the existing IEEE 802.11b and IEEE 802.11a wireless standards, while maintaining full backward compatibility with these standards.
  • 802.11g — applies to wireless LANs and is used for transmission over short distances at up to 54-Mbps in the 2.4 GHz bands.
  • 802.11n — 802.11n builds upon previous 802.11 standards by adding multiple-input multiple-output (MIMO). The additional transmitter and receiver antennas allow for increased data throughput through spatial multiplexing, and increased range by exploiting spatial diversity through coding schemes such as Alamouti coding. Real-world speeds are around 100 Mbit/s (up to 250 Mbit/s at the PHY level), roughly 4-5 times faster than 802.11g.
  • 802.11r –  802.11r, also called Fast Basic Service Set (BSS) Transition, supports VoWi-Fi handoff between access points to enable VoIP roaming on a Wi-Fi network with 802.1X authentication.
  • 802.1X — Not to be confused with 802.11x (the term used to describe the family of 802.11 standards), 802.1X is an IEEE standard for port-based Network Access Control that allows network administrators to restrict use of IEEE 802 LAN service access points to communication between authenticated and authorized devices.
802.11 network standards

| Protocol   | Release  | Freq. (GHz) | Bandwidth (MHz) | Data rate per stream (Mbit/s)               | MIMO streams | Modulation | Indoor range (m / ft) | Outdoor range (m / ft) |
| –          | Jun 1997 | 2.4         | 20              | 1, 2                                        | 1            | DSSS, FHSS | 20 / 66               | 100 / 330              |
| a          | Sep 1999 | 5           | 20              | 6, 9, 12, 18, 24, 36, 48, 54                | 1            | OFDM       | 35 / 115              | 120 / 390              |
| a          | Sep 1999 | 3.7         | 20              | 6, 9, 12, 18, 24, 36, 48, 54                | 1            | OFDM       | –                     | 5,000 / 16,000         |
| b          | Sep 1999 | 2.4         | 20              | 5.5, 11                                     | 1            | DSSS       | 35 / 115              | 140 / 460              |
| g          | Jun 2003 | 2.4         | 20              | 6, 9, 12, 18, 24, 36, 48, 54                | 1            | OFDM, DSSS | 38 / 125              | 140 / 460              |
| n          | Oct 2009 | 2.4/5       | 20              | 7.2, 14.4, 21.7, 28.9, 43.3, 57.8, 65, 72.2 | 4            | OFDM       | 70 / 230              | 250 / 820              |
| n          | Oct 2009 | 2.4/5       | 40              | 15, 30, 45, 60, 90, 120, 135, 150           | 4            | OFDM       | 70 / 230              | 250 / 820              |
| ac (draft) | Nov 2011 | 5           | 80              | 433, 867                                    | 8            |            |                       |                        |
| ac (draft) | Nov 2011 | 5           | 160             | 867, 1.73 Gbit/s, 3.47 Gbit/s, 6.93 Gbit/s  | 8            |            |                       |                        |

Current 802.11 standards define “frame” types for use in transmission of data as well as management and control of wireless links.

Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, a payload, and a frame check sequence (FCS); some frames do not have a payload. The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. The frame control field is further subdivided into the following sub-fields:

  • Protocol Version: two bits representing the protocol version. Currently used protocol version is zero. Other values are reserved for future use.
  • Type: two bits identifying the type of WLAN frame. Control, Data and Management are various frame types defined in IEEE 802.11.
  • Sub Type: Four bits providing additional discrimination between frames. Type and subtype together identify the exact frame.
  • ToDS and FromDS: Each is one bit in size. They indicate whether a data frame is headed for a distribution system. Control and management frames set these values to zero. Data frames set one of these bits; however, communication within an IBSS network always sets both bits to zero.
  • More Fragments: The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last fragment of a packet has this bit set.
  • Retry: Frames sometimes require retransmission, so there is a Retry bit which is set to one when a frame is resent. This aids in the elimination of duplicate frames.
  • Power Management: This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and never set the power-save bit.
  • More Data: The access point uses this bit to facilitate stations in power-save mode: it indicates that at least one more frame is buffered for the station in the distribution system.
  • WEP: This bit is set to one when the frame body has been encrypted.
  • Order: This bit is set only when the “strict ordering” delivery method is employed. Frames and fragments are not always sent in order, because strict ordering carries a transmission performance penalty.
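The sub-field layout above can be made concrete by decoding the two frame-control bytes with bit masks. This sketch follows the standard IEEE 802.11 bit layout (version/type/subtype in the first byte, the eight single-bit flags in the second); the sample bytes are invented for illustration.

```python
# Decode the 2-byte 802.11 frame control field into its sub-fields.

def parse_frame_control(fc):
    b0, b1 = fc[0], fc[1]
    return {
        "version":   b0 & 0b11,          # currently always 0
        "type":      (b0 >> 2) & 0b11,   # 0=management, 1=control, 2=data
        "subtype":   (b0 >> 4) & 0b1111,
        "to_ds":     bool(b1 & 0x01),
        "from_ds":   bool(b1 & 0x02),
        "more_frag": bool(b1 & 0x04),
        "retry":     bool(b1 & 0x08),
        "pwr_mgmt":  bool(b1 & 0x10),
        "more_data": bool(b1 & 0x20),
        "protected": bool(b1 & 0x40),    # the WEP bit
        "order":     bool(b1 & 0x80),
    }

# A data frame (type 2, subtype 0) headed to the distribution system:
fields = parse_frame_control(bytes([0x08, 0x01]))
print(fields["type"], fields["to_ds"])  # 2 True
```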

source: wikipedia

What is Ethernet 802.3

Ethernet is a family of computer networking technologies for local area networks (LANs) commercially introduced in 1980. Standardized in IEEE 802.3, Ethernet has largely replaced competing wired LAN technologies. IEEE 802.3 is a working group and a collection of IEEE standards produced by the working group defining the physical layer and data link layer’s media access control (MAC) of wired Ethernet.

IEEE 802.3 Frame Format
The IEEE 802.3 frame format is the result of merging the IEEE 802.2 and IEEE 802.3 specifications; it consists of an IEEE 802.3 header and trailer together with an IEEE 802.2 header.

Structure of data
An IEEE 802.3 frame consists of several fields as follows:

IEEE 802.3 header:

  •         Preamble
  •         Start Delimiter
  •         Destination Address
  •         Source Address
  •         Length

IEEE 802.2 Logical Link Control header:

  •         Destination Service Access Point (DSAP)
  •         Source Service Access Point (SSAP)
  •         Control

Payload

IEEE 802.3 Trailer:

  •         Frame Check Sequence (FCS)


Ethernet frames
A data packet on the wire is called a frame. A frame begins with preamble and start frame delimiter, followed by an Ethernet header featuring source and destination MAC addresses. The middle section of the frame consists of payload data including any headers for other protocols (e.g., Internet Protocol) carried in the frame. The frame ends with a 32-bit cyclic redundancy check, which is used to detect corruption of data in transit.

Varieties of Ethernet
The Ethernet physical layer evolved over a considerable time span and encompasses quite a few physical media interfaces and several magnitudes of speed. The most common forms used are 10BASE-T, 100BASE-TX, and 1000BASE-T. All three utilize twisted pair cables and 8P8C modular connectors. They run at 10 Mbit/s, 100 Mbit/s, and 1 Gbit/s, respectively. Fiber optic variants of Ethernet offer high performance, electrical isolation and distance (tens of kilometers with some versions). In general, network protocol stack software will work similarly on all varieties.

Ethernet protocols refer to the family of local-area network (LAN) technologies covered by IEEE 802.3. In the Ethernet standard, there are two modes of operation: half-duplex and full-duplex. In half-duplex mode, data are transmitted using the popular Carrier-Sense Multiple Access/Collision Detection (CSMA/CD) protocol on a shared medium. The main disadvantages of half-duplex operation are its efficiency and distance limitation: the link distance is limited by the minimum MAC frame size, and this restriction reduces efficiency drastically for high-rate transmission. Therefore, the carrier extension technique is used in Gigabit Ethernet to enforce a minimum frame size of 512 bytes and still achieve a reasonable link distance.
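The carrier-extension rule mentioned above reduces to a one-line calculation: frames shorter than the 512-byte slot time of half-duplex Gigabit Ethernet are padded out with non-data extension symbols up to that minimum. A rough sketch (the real extension is appended after the FCS by the PHY, not computed in software like this):

```python
# Carrier extension for half-duplex Gigabit Ethernet: how many
# extension bytes a frame of a given length needs to fill the
# 512-byte slot time.

GIGABIT_SLOT_BYTES = 512

def extension_bytes(frame_len):
    return max(0, GIGABIT_SLOT_BYTES - frame_len)

print(extension_bytes(64))    # 448: a minimum-size frame needs a lot of extension
print(extension_bytes(512))   # 0: frames at or above the slot time need none
```

This waste on short frames is exactly the efficiency penalty the text describes, and it is why frame bursting and full-duplex operation are preferred at gigabit speeds.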

Four data rates are currently defined for operation over optical fiber and twisted-pair cables:

  • 10 Mbps – 10Base-T Ethernet (IEEE 802.3)
  • 100 Mbps – Fast Ethernet (IEEE 802.3u)
  • 1000 Mbps – Gigabit Ethernet (IEEE 802.3z)
  • 10-Gigabit – 10 Gbps Ethernet (IEEE 802.3ae).

As with all IEEE 802 protocols, the ISO data link layer is divided into two IEEE 802 sublayers, the Media Access Control (MAC) sublayer and the MAC-client sublayer. The IEEE 802.3 physical layer corresponds to the ISO physical layer.

The MAC sub-layer has two primary responsibilities:

  • Data encapsulation, including frame assembly before transmission, and frame parsing/error detection during and after reception
  • Media access control, including initiation of frame transmission and recovery from transmission failure

The MAC-client sub-layer may be one of the following:

  • Logical Link Control (LLC), which provides the interface between the Ethernet MAC and the upper layers in the protocol stack of the end station. The LLC sublayer is defined by IEEE 802.2 standards.
  • Bridge entity, which provides LAN-to-LAN interfaces between LANs that use the same protocol (for example, Ethernet to Ethernet) and also between different protocols (for example, Ethernet to Token Ring). Bridge entities are defined by IEEE 802.1 standards.

Protocol Structure – Ethernet: IEEE 802.3 Local Area Network protocols

The basic IEEE 802.3 Ethernet MAC data frame for 10/100 Mbps Ethernet:

| Pre (7) | SFD (1) | DA (6) | SA (6) | Length/Type (2) | Data unit + pad (46-1500 bytes) | FCS (4) |
  • Preamble (PRE)– 7 bytes. The PRE is an alternating pattern of ones and zeros that tells receiving stations that a frame is coming, and that provides a means to synchronize the frame-reception portions of receiving physical layers with the incoming bit stream.
  • Start-of-frame delimiter (SFD)– 1 byte. The SFD is an alternating pattern of ones and zeros, ending with two consecutive 1-bits, indicating that the next bit is the left-most bit in the left-most byte of the destination address.
  • Destination address (DA)– 6 bytes. The DA field identifies which station(s) should receive the frame.
  • Source address (SA)– 6 bytes. The SA field identifies the sending station.
  • Length/Type– 2 bytes. This field indicates either the number of MAC-client data bytes contained in the data field of the frame, or the frame type ID if the frame is assembled using an optional format.
  • Data– A sequence of n bytes (46 ≤ n ≤ 1500) of any value. (The minimum total frame size is 64 bytes.)
  • Frame check sequence (FCS)– 4 bytes. This sequence contains a 32-bit cyclic redundancy check (CRC) value, which is created by the sending MAC and is recalculated by the receiving MAC to check for damaged frames.
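The field layout above can be sketched by assembling a frame in code. This is a simplified illustration, not a transmit-ready implementation: the preamble and SFD are added by the hardware and are omitted here, the addresses are made up, and `zlib.crc32` is used because it implements the same CRC-32 polynomial Ethernet uses (transmitted least-significant byte first, hence the little-endian pack).

```python
import struct
import zlib

# Assemble an Ethernet MAC frame: DA, SA, Length/Type, payload padded
# to the 46-byte minimum, then a 4-byte CRC-32 FCS.

def build_frame(dst, src, ethertype, payload):
    payload = payload.ljust(46, b"\x00")        # pad to the 46-byte minimum
    header = dst + src + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + fcs

dst = bytes.fromhex("00225A0A0B0C")             # hypothetical addresses
src = bytes.fromhex("0022FA5AB4C2")
frame = build_frame(dst, src, 0x0800, b"hello") # 0x0800 = IPv4 EtherType
print(len(frame))  # 64: the minimum Ethernet frame size (without preamble/SFD)
```

The receiver recomputes the same CRC over the header and data; a mismatch with the received FCS marks the frame as damaged, exactly as the FCS bullet above describes.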

MAC Frame with Gigabit Ethernet Carrier Extension (IEEE 802.3z)

1000Base-X has a minimum frame size of 416 bytes, and 1000Base-T has a minimum frame size of 520 bytes. The extension is a non-data, variable-length field appended to frames that are shorter than the minimum length.

| Pre (7) | SFD (1) | DA (6) | SA (6) | Length/Type (2) | Data unit + pad (variable) | FCS (4) | Ext (variable) |

source: global networking sites

 

Linux – An Alternative Operating System?

Linux is a Unix-like computer operating system assembled under the model of free and open source software development and distribution. Linux was originally developed as a free operating system for Intel x86-based personal computers. It has since been ported to more computer hardware platforms than any other operating system. Linux is the kernel of an operating system, built on the Unix tradition. It was originally developed by Linus Torvalds of Finland, who currently owns the Linux trademark. Linux stands for Linus' Unix. Using the open source code of the Linux kernel, people have been developing operating systems based on it; these are called “Linux distributions”.

In 1991, in Helsinki, Linus Torvalds began a project that later became the Linux kernel. It was initially a terminal emulator, which Torvalds used to access the large UNIX servers of the university.

Linux and most GNU software are licensed under the GNU General Public License (GPL). The GPL requires that anyone who distributes Linux must make the source code (and any modifications) available to the recipient under the same terms. Typically Linux is packaged in a format known as a Linux distribution for desktop and server use. Some popular mainstream Linux distributions include Debian (and its derivatives such as Ubuntu), Fedora and openSUSE.


Linux Advantages

  •  Low cost: You don’t need to spend time and money to obtain licenses since Linux and much of its software come with the GNU General Public License. You can start to work immediately without worrying that your software may stop working anytime because the free trial version expires. Additionally, there are large repositories from which you can freely download high quality software for almost any task you can think of.
  • Stability: Linux doesn’t need to be rebooted periodically to maintain performance levels. It doesn’t freeze up or slow down over time due to memory leaks and such. Continuous up-times of hundreds of days (up to a year or more) are not uncommon.
  • Performance: Linux provides persistent high performance on workstations and on networks. It can handle unusually large numbers of users simultaneously, and can make old computers sufficiently responsive to be useful again.
  • Network friendliness: Linux was developed by a group of programmers over the Internet and has therefore strong support for network functionality; client and server systems can be easily set up on any computer running Linux. It can perform tasks such as network backups faster and more reliably than alternative systems.
  • Flexibility: Linux can be used for high performance server applications, desktop applications, and embedded systems. You can save disk space by only installing the components needed for a particular use. You can restrict the use of specific computers by installing for example only selected office applications instead of the whole suite.
  • Compatibility: It runs all common Unix software packages and can process all common file formats.
  • Choice: The large number of Linux distributions gives you a choice. Each distribution is developed and supported by a different organization. You can pick the one you like best; the core functionalities are the same; most software runs on most distributions.
  • Fast and easy installation: Most Linux distributions come with user-friendly installation and setup programs. Popular Linux distributions come with tools that make installation of additional software very user friendly as well.
  • Full use of hard disk: Linux continues to work well even when the hard disk is almost full.
  • Multitasking: Linux is designed to do many things at the same time; e.g., a large printing job in the background won’t slow down your other work.
  • Security: Linux is one of the most secure operating systems. Firewalls and a flexible file-permission system prevent access by unwanted visitors or viruses. Linux users have the option to select and safely download software, free of charge, from online repositories containing thousands of high quality packages. No purchase transactions requiring credit card numbers or other sensitive personal information are necessary.
  • Open Source: If you develop software that requires knowledge or modification of the operating system code, Linux’s source code is at your fingertips. Most Linux applications are Open Source as well.
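The flexible permission system mentioned under Security can be seen with a few commands. A minimal sketch (the file name and permission bits are illustrative):

```shell
# Create a file and restrict it: owner read/write, group read, others nothing.
workdir=$(mktemp -d)
cd "$workdir"
echo "secret" > notes.txt
chmod 640 notes.txt

# The first column of ls -l shows the resulting permission bits.
ls -l notes.txt | cut -c1-10    # → -rw-r-----
```

The octal mode 640 maps directly onto the three permission triplets (owner, group, other) shown by `ls -l`.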

User interface
Users operate a Linux-based system through a command line interface (CLI), a graphical user interface (GUI), or through controls attached to the associated hardware, which is common for embedded systems. On desktop systems, the default mode is usually a graphical user interface, while the CLI remains available through terminal emulator windows or on a separate virtual console.

On desktop systems, the most popular user interfaces are the extensive desktop environments KDE Plasma Desktop, GNOME, and Xfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called “X”. It provides network transparency and permits a graphical application running on one system to be displayed on another where a user may interact with the application.

A Linux distribution, commonly called a “distro”, is a project that manages a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such as dpkg, Synaptic, YaST, or Portage to install, remove, and update all of a system’s software from one central location.
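The same “install this package” intent maps to a different package-manager command on each distribution family. A small sketch of that mapping (the distribution names and package below are illustrative, not exhaustive):

```shell
# Map a distribution family to its package-manager install command.
install_cmd() {
    case "$1" in
        debian|ubuntu)   echo "apt-get install -y $2" ;;
        fedora|redhat)   echo "yum install -y $2" ;;
        suse|opensuse)   echo "zypper install -y $2" ;;
        gentoo)          echo "emerge $2" ;;
        *)               echo "unknown distribution: $1" >&2; return 1 ;;
    esac
}

install_cmd debian nginx   # → apt-get install -y nginx
install_cmd gentoo nginx   # → emerge nginx
```

Whichever manager is used, the point is the same: one central tool tracks everything installed on the system.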

Programming on Linux

Most Linux distributions support dozens of programming languages. The original development tools used for building both Linux applications and operating system programs are found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU build system.  Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic languages. While not as common, Linux also supports C# (via Mono), Vala, and Scheme.

Linux distributions have long been used as server operating systems, and have risen to prominence in that area. Linux distributions are the cornerstone of the LAMP server-software combination (Linux, Apache, MySQL, Perl/PHP/Python) which has achieved popularity among developers, and which is one of the more common platforms for website hosting.

Cost
The following distributions are available for free (without cost): aLinux, Alpine Linux, ALT Linux, Annvix, Arch Linux, Ark Linux, Asianux, BLAG Linux and GNU, Bodhi Linux, Caixa Mágica, CentOS, CRUX, Damn Small Linux, Debian, DeLi Linux, Devil-Linux, dyne:bolic, EasyPeasy, Edubuntu, Elive, EnGarde Secure Linux, Fedora, Finnix, Foresight Linux, Freespire, Frugalware, Gentoo, gNewSense, gnuLinEx, GoboLinux, Gobuntu, Impi Linux, Kanotix, Knoppix, KnoppMyth, Kubuntu, Kurumin, Linux Mint, Lunar Linux, Micro Core Linux, Mageia, MintPPC, Musix GNU/Linux, Network Security Toolkit, NimbleX, NUbuntu, openSUSE, Pardus, Parsix, PCLinuxOS, Puppy Linux, Sabayon Linux, Scientific Linux, sidux, Slackware, Slax, SliTaz GNU/Linux, Source Mage GNU/Linux, Symphony OS, SYS, Tiny Core Linux, Tor-ramdisk, Trustix, Ubuntu, Ututo, Super OS, Xubuntu, XBMC Live, Yoper, Zenwalk and OpenWrt.

The following distributions have several editions, some of which are without cost and some of which do cost money: ClearOS, Mandriva Linux, MEPIS and Red Flag Linux.

The following distributions cost money: Novell Open Enterprise Server, Red Hat Enterprise Linux, Rxart, and SUSE Linux Enterprise.

The following distributions had at least one version that used to cost money: Caixa Mágica (now freely available), Elive (now freely available), Xandros (discontinued), and Linspire (discontinued).

Commands
For POSIX compliant (or partly compliant) systems like FreeBSD, Linux, Mac OS X or Solaris, the basic commands are the same because they are standardized.

| Description | FreeBSD | Linux | Mac OS X | Solaris | Windows (cmd) | Windows (PowerShell) | Windows (Cygwin, SFU or MKS) |
|---|---|---|---|---|---|---|---|
| list directory | ls | ls | ls | ls | dir | dir, ls, Get-ChildItem | ls |
| clear console | clear | clear | clear | clear | cls | clear | clear |
| copy file(s) | cp | cp | cp | cp | copy | cp, Copy-Item | cp |
| move file(s) | mv | mv | mv | mv | move | mv, Move-Item | mv |
| rename file(s) | mv | mv, rename | mv | mv | ren, rename | ren, mv | mv |
| delete file(s) | rm | rm | rm | rm | del (erase) | rm, Remove-Item | rm |
| delete directory | rmdir | rmdir | rmdir | rmdir | rd (rmdir) | rmdir | rmdir |
| create directory | mkdir | mkdir | mkdir | mkdir | md (mkdir) | mkdir | mkdir |
| change current directory | cd | cd | cd | cd | cd (chdir) | cd, Set-Location | cd |
| run shell script with new shell | sh file.sh | sh file.sh | sh file.sh | sh file.sh | cmd /c file.cmd | ? | sh file.sh |
| kill processes | kill, killall | killall, pkill, kill, skill | kill, killall | kill, pkill | taskkill | taskkill | kill |
| change process priority | nice | nice, chrt | nice | nice | start /low, /normal, /high, /realtime | ? | nice |
| change I/O priority | [c 1] | ionice | nice[c 2] | ? | ? | ? | ? |
| create file system | newfs | mkfs | mkfs | newfs | format | ? | ? |
| file system check and recovery | fsck | fsck | fsck | fsck | chkdsk | ? | ? |
| create software RAID | atacontrol, gmirror, zfs create | mdadm --create | diskutil appleRAID | metainit, zfs create | diskpart (mirror only) | diskpart (mirror only) | ? |
| mount device | mount | mount | mount, diskutil mount | mount | mountvol | mount, New-PSDrive | ? |
| unmount device | umount | umount | umount, diskutil unmount | umount | mountvol /d | Remove-PSDrive | ? |
| mount file as block device | mdconfig + mount | mount -o loop | hdid | lofiadm + mount | ? | ? | ? |
| show network configuration | ifconfig | ip addr, ifconfig | ifconfig | ifconfig | ipconfig | ipconfig | ? |
| show network route | route | ip route | route | route | route | ? | ? |
| trace network route | traceroute | traceroute | traceroute | traceroute | tracert | tracert | ? |
| trace network route with pings | traceroute -I | traceroute -I, mtr | traceroute -I | traceroute -I | pathping | pathping | ? |
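Because these basic commands are standardized, the same session runs unchanged on any of the POSIX-style systems above. A small sketch (directory and file names are illustrative):

```shell
# A short portable file-management session using only standard commands.
workdir=$(mktemp -d)
cd "$workdir"
mkdir projects                              # create directory
echo "draft" > notes.txt
cp notes.txt projects/                      # copy file
mv projects/notes.txt projects/readme.txt   # move / rename file
rm notes.txt                                # delete file
ls projects                                 # → readme.txt
```

On Windows you would swap in the cmd or PowerShell equivalents from the table, but the sequence of operations is identical.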

Popular distributions

Well-known Linux distributions include:

  • Arch Linux, a minimalist rolling release distribution targeted at experienced Linux users, maintained by a volunteer community and primarily based on binary packages in the tar.gz and tar.xz format.
  • Debian, a non-commercial distribution maintained by a volunteer developer community with a strong commitment to free software principles
  • Fedora, a community distribution sponsored by Red Hat
    • Red Hat Enterprise Linux, which is a derivative of Fedora, maintained and commercially supported by Red Hat.
      • CentOS, a distribution derived from the same sources used by Red Hat, maintained by a dedicated volunteer community of developers with both 100% Red Hat-compatible versions and an upgraded version that is not always 100% upstream compatible
      • Oracle Enterprise Linux, which is a derivative of Red Hat Enterprise Linux, maintained and commercially supported by Oracle.
    • Mandriva, a Red Hat derivative popular in France and Brazil, today maintained by the French company of the same name.
      • PCLinuxOS, a derivative of Mandriva, grew from a group of packages into a community-spawned desktop distribution.
  • Gentoo, a distribution targeted at power users, known for its FreeBSD Ports-like automated system for compiling applications from source code
  • openSUSE, a community distribution mainly sponsored by Novell.
  • Slackware, one of the first Linux distributions, founded in 1993, and since then actively maintained by Patrick J. Volkerding.
  • Damn Small Linux (“DSL”), a tiny distribution designed to fit on a business-card-sized CD

DistroWatch attempts to include every known distribution of Linux, whether currently active or not; it also maintains a ranking of distributions based on page views, as a measure of relative popularity.

A Live Distro or Live CD is a Linux distribution that can be booted from a compact disc or other removable medium (such as a DVD or USB flash drive) instead of the conventional hard drive. Some minimal distributions such as tomsrtbt can be run directly from as little as one floppy disk without needing to change the system’s hard drive contents. Many popular distributions come in both “Live” and conventional forms (the conventional form being a network or removable media image which is intended to be used for installation only). This includes SUSE, Ubuntu, Linux Mint, MEPIS, Sidux, and Fedora. Some distributions, such as Knoppix, Devil-Linux, SuperGamer, and dyne:bolic are designed primarily for Live CD, Live DVD, or USB flash drive use.

Difference Between Linux and UNIX

UNIX is a copyrighted name, and only big companies are licensed to use it; IBM AIX, Sun Solaris, and HP-UX are all UNIX operating systems. Most UNIX systems are commercial in nature.

Linux is a UNIX clone; measured against the Portable Operating System Interface (POSIX) standards, however, Linux can be considered a form of UNIX.

Linux is just a kernel; a Linux distribution makes it a complete, usable operating system by adding a GUI system, GNU utilities (such as cp, mv, ls, date, and bash), installation and management tools, GNU C/C++ compilers, editors (vi), and various applications (such as OpenOffice and Firefox). Most UNIX operating systems, by contrast, are considered complete operating systems in themselves, since everything comes from a single source or vendor: HP-UX or Solaris, for example, ships with a full A-to-Z set of programs such as editors and compilers.

Linux is free, both as in cost and as in freedom: you can download it from the Internet or redistribute it under the GNU licenses, and it enjoys the best community support. Most UNIX-like operating systems are not free (but this is changing fast, for example with OpenSolaris). Some Linux vendors such as Red Hat and Novell provide additional Linux support, consultancy, bug fixing, and training for additional fees.

Linux is considered one of the most user-friendly UNIX-like operating systems: it makes it easy to install sound cards, Flash players, and other desktop goodies. However, Apple's Mac OS X is the most popular UNIX operating system for desktop usage.

Linux comes with the open-source netfilter/iptables firewall to protect your servers and desktops from attackers. UNIX operating systems either come with their own firewall product (Solaris, for example, ships with an ipfilter-based firewall) or require third-party software such as a Check Point firewall.
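A minimal netfilter/iptables ruleset might look like the following sketch, written in iptables-restore format (the policies and the SSH port are illustrative; applying it requires root):

```shell
# Write a default-deny inbound ruleset to a file.
cat > basic.rules <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF

# As root, load it atomically with:
#   iptables-restore < basic.rules
```

Everything inbound is dropped unless it is loopback traffic, part of an established connection, or new SSH traffic on port 22.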

UNIX and Linux come with different sets of tools for backing up data to tape and other backup media. However, both share some common tools, such as tar, dump/restore, and cpio.
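Of the shared tools, tar is the simplest. A sketch of a backup-and-restore round trip (directory and file names are illustrative):

```shell
# Back up a directory to a tar archive and restore it elsewhere.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p data && echo "payload" > data/file.txt

tar -cf backup.tar data         # create the archive (add z for gzip compression)
mkdir restore
tar -xf backup.tar -C restore   # extract into another directory

cat restore/data/file.txt       # → payload
```

The same commands work unchanged on Linux, the BSDs, Solaris, and Mac OS X, which is why tar archives are a common interchange format between them.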

File Systems

  •     Linux by default supports and uses the ext3 or ext4 file systems.
  •     UNIX comes with various file systems, such as jfs and gpfs (AIX), hfs and vxfs (HP-UX), and ufs and zfs (Solaris).
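Creating one of the Linux file systems can be tried safely inside a plain file, without touching a real disk. A sketch (the image size is illustrative; it assumes the e2fsprogs tools are installed, and mounting the image would additionally require root):

```shell
# Create an ext4 file system inside an 8 MiB image file.
workdir=$(mktemp -d)
cd "$workdir"
dd if=/dev/zero of=fs.img bs=1M count=8 2>/dev/null
mkfs.ext4 -q -F fs.img    # -F: allow operating on a regular file

# As root you could then mount it via a loop device:
#   mount -o loop fs.img /mnt
```

On a UNIX system the equivalent step would use that platform's tool, e.g. newfs on FreeBSD or Solaris.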

System Administration Tools

  •     UNIX comes with its own tools, such as SAM on HP-UX.
  •     SUSE Linux comes with YaST.
  •     Red Hat Linux comes with its own GUI tools called redhat-config-*.

System Startup Scripts
Almost every version of UNIX and Linux comes with system initialization script but they are located in different directories:

  •     HP-UX – /sbin/init.d
  •     AIX – /etc/rc.d/init.d
  •     Linux – /etc/init.d
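Whatever directory they live in, these System V-style init scripts share the same shape: a shell script dispatching on start/stop/status. A self-contained skeleton of the kind found under /etc/init.d (the service name and messages are illustrative):

```shell
# Write and exercise a minimal SysV-style init script.
workdir=$(mktemp -d)
cd "$workdir"
cat > myservice <<'EOF'
#!/bin/sh
# Minimal init-script skeleton: dispatch on the requested action.
case "$1" in
  start)  echo "Starting myservice" ;;
  stop)   echo "Stopping myservice" ;;
  status) echo "myservice is running" ;;
  *)      echo "Usage: $0 {start|stop|status}" >&2; exit 1 ;;
esac
EOF
chmod +x myservice

./myservice start    # → Starting myservice
./myservice status   # → myservice is running
```

On a real system the start/stop branches would launch or signal the daemon, and symlinks in the rc*.d directories control which scripts run at each runlevel.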

UNIX Operating System Names
A few popular names:

  1.     HP-UX
  2.     IBM AIX
  3.     Sun Solaris
  4.     Mac OS X
  5.     IRIX

Linux Distribution (Operating System) Names
A few popular names:

  1.     Redhat Enterprise Linux
  2.     Fedora Linux
  3.     Debian Linux
  4.     Suse Enterprise Linux
  5.     Ubuntu Linux

Common Things Between Linux & UNIX
Both share many common applications such as:

  •     GUI, file, and windows managers (KDE, Gnome)
  •     Shells (ksh, csh, bash)
  •     Various office applications such as OpenOffice.org
  •     Development tools (perl, php, python, GNU c/c++ compilers)
  •     POSIX interface

A Sample UNIX Desktop Screenshot

A Sample Linux Desktop Screenshot

References:
http://en.wikipedia.org/wiki/List_of_Linux_distributions

What I need to know about Active Directory (AD)

What is Active Directory?

Active Directory is a database that keeps track of all the user accounts and passwords in your organization. It allows you to store your user accounts and passwords in one protected location, improving your organization’s security.

Active Directory is subdivided into one or more domains. A domain is a security boundary. Each domain is hosted by a server computer called a domain controller (DC). A domain controller manages all of the user accounts and passwords for a domain.

Domains and the Domain Name System (DNS)

Domains are named using the Domain Name System (DNS). If your company is called ACME Corporation your DNS name would be (for example) acme.com. This is the top-level domain name for your company. The security domain in Active Directory maps directly to the DNS domain name.

For larger organizations you can subdivide Active Directory into child domains (based on geography, for example). If ACME Corporation has three divisions named West, Central, and East, the sub-domains can have the DNS names west.acme.com, central.acme.com, and east.acme.com.

Each domain requires a server computer. In the above scenario you would need at least four servers to host Active Directory as follows:

  •     acme.com
  •     west.acme.com
  •     central.acme.com
  •     east.acme.com

Active Directory, also referred to as AD, was created in 1996 and first released with Windows 2000 Server as a directory service for Windows domain networks. Active Directory is a special-purpose database that serves as a central location for authenticating and authorizing all the users and computers within a network. Active Directory uses the Lightweight Directory Access Protocol (LDAP), an application protocol for accessing and maintaining distributed directory information services over an IP network.
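The LDAP names AD uses mirror the DNS hierarchy: acme.com becomes the distinguished name dc=acme,dc=com, which is what you would pass as a search base to an LDAP client such as ldapsearch. A small sketch of that mapping (the domain names are illustrative):

```shell
# Convert a DNS domain name into an LDAP base DN, one dc= component
# per DNS label, e.g. west.acme.com → dc=west,dc=acme,dc=com.
dns_to_dn() {
    echo "$1" | awk -F. '{
        for (i = 1; i <= NF; i++)
            printf "%sdc=%s", (i > 1 ? "," : ""), $i
        print ""
    }'
}

dns_to_dn acme.com        # → dc=acme,dc=com
dns_to_dn west.acme.com   # → dc=west,dc=acme,dc=com
```

A query against a domain controller would then look like `ldapsearch -b "dc=acme,dc=com" ...` (hostname and credentials omitted here).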

The basic internal structure of Active Directory is a hierarchical arrangement of objects, which can be categorized broadly into resources and security principals. Examples of Active Directory objects are users, computers, groups, sites, services, and printers. Every object is a single entity with a specific set of attributes. The attributes of objects, along with the kinds of objects that can be stored in the AD, are defined by a schema.

The intrinsic framework of Active Directory is divided into a number of levels based on the visibility of objects. An AD network can be organized into four types of container structure: forests, domains, organizational units, and sites.

  •     Forest: a collection of AD objects, their attributes, and attribute syntax.
  •     Domain: a collection of computer objects in the AD that share a common set of policies, a name, and a database of their members.
  •     Organizational Units: OUs are containers within a domain into which objects are grouped. They are used to build a hierarchy for the domain that resembles the structure of the organization itself.
  •     Sites: physical groupings defined by one or more IP subnets. Sites are independent of the domain and OU structure and are used to distinguish between locations connected by low- and high-speed connections.

Active Directory Domain Services

Active Directory Domain Services (AD DS), formerly known simply as Active Directory, is the central location for configuration information, authentication requests, and information about all of the objects that are stored within your forest. Using Active Directory, you can efficiently manage users, computers, groups, printers, applications, and other directory-enabled objects from one secure, centralized location.

Active Directory Rights Management Services

Your organization’s intellectual property should be kept safe and highly secure. Active Directory Rights Management Services (AD RMS), a component of Windows Server 2008 R2, is available to help make sure that only those individuals who need to view a file can do so. AD RMS can protect a file by identifying the rights that a user has to the file. Rights can be configured to allow a user to open, modify, print, forward, or take other actions with the rights-managed information. With AD RMS, you can now safeguard data when it is distributed outside of your network.

Active Directory Federation Services

Active Directory Federation Services is a highly secure, highly extensible, and Internet-scalable identity access solution that allows organizations to authenticate users from partner organizations. Using AD FS in Windows Server 2008 R2, you can simply and very securely grant external users access to your organization’s domain resources. AD FS can also simplify integration between untrusted resources and domain resources within your own organization.

Active Directory Certificate Services

Most organizations use certificates to prove the identity of users or computers, as well as to encrypt data during transmission across unsecured network connections. Active Directory Certificate Services (AD CS) enhances security by binding the identity of a person, device, or service to their own private key. Storing the certificate and private key within Active Directory helps securely protect the identity, and Active Directory becomes the centralized location for retrieving the appropriate information when an application places a request.

Active Directory Lightweight Directory Services

Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode, can be used to provide directory services for directory-enabled applications. Rather than using your organization’s AD DS database to store the directory-enabled application data, AD LDS can store it instead. The two components work in conjunction: AD DS provides a central location for security accounts, while AD LDS supports the application configuration and directory data. You can also reduce the overhead associated with Active Directory replication, avoid extending the Active Directory schema to support the application, and partition the directory structure so that the AD LDS service is deployed only to the servers that need to support the directory-enabled application.

The advantages of Active Directory for managing user accounts:

1. It provides fully integrated security in the form of user logons and authentication.
2. It eases administration through group policies and permissions.
3. It makes resources easy to identify.
4. It provides scalability, flexibility, and extensibility.
5. It is tightly integrated with DNS for all its operations, which improves identification and migration.
6. Its services provide automatic replication of information between domain controllers.
7. It supports integration with other directory services.
8. It supports multiple authentication protocols.

Figure 1. Users container within Active Directory

Figure 2. Builtin container within Active Directory

There are plenty of built-in groups to choose from. Some are used for administration of Active Directory, services, and other important directory service features. These groups are located in the Users container, as shown in Figure 1, and include:

  •     Cert Publishers
  •     DNSAdmins
  •     Domain Admins
  •     DHCP Admins
  •     Enterprise Admins
  •     Group Policy Creator Owners
  •     Schema Admins

These groups are essential for Active Directory and should be used to provide administrative control over these areas. It is not really possible to use Delegation to replace the functions that these groups provide.

Another category of built-in groups lives in a different place in Active Directory: the Builtin container, as shown in Figure 2. These groups include:

  •     Administrators
  •     Account Operators
  •     Backup Operators
  •     Server Operators
  •     Print Operators

The built-in groups have a very distinct scope: they are designed to be used on domain controllers, and domain controllers only. We know this because all of these groups are Domain Local (called Local in Windows NT), which means they are meant to grant privileges to administrators who need to perform tasks on the domain controllers.

Another way to confirm this is that each local Security Accounts Manager (SAM) on clients and servers has its own local built-in groups to perform these duties. The Administrators and Backup Operators groups are in every SAM. The other groups are not needed in the local SAM, because the Administrators group or Power Users group provides the privileges to accomplish the associated tasks on a client or server.

It is important to not only know the scope of these built-in groups, but also the capabilities of these groups. Table 1 lists what each group can do.

 

| Privilege | Administrators | Account Operators | Backup Operators | Print Operators | Server Operators |
|---|---|---|---|---|---|
| Create, delete, and manage user and group accounts | X | X | | | |
| Read all user information | X | X | | | X |
| Reset passwords for user accounts | X | X | | | |
| Share directories | X | | | | X |
| Create, delete, and manage printers | X | | | X | X |
| Back up files and directories | X | | X | | X |
| Restore files and directories | X | | X | | X |
| Log on locally | X | X | X | X | X |
| Shut down the system | X | X | X | X | X |

Table 1: Privileges of built-in groups in Active Directory

As you scan through the capabilities that the members of the built-in groups have, keep in mind that these capabilities have the scope of all domain controllers in the domain, as well as all objects within the domain. Therefore, if you add a user to one of these groups, you can’t scale down their scope of influence.

For example, it is common to want a junior administrator or the helpdesk staff to be able to reset passwords for users in the domain. With the built-in groups, you would simply add them to the Account Operators group to accomplish this. However, take a look at the other privileges that this membership provides. Members can also perform all of the following tasks:

  •     Create, delete, and manage user accounts
  •     Create, delete, and manage group accounts
  •     Log on locally
  •     Shut down the system

As you can see, these additional privileges vastly expand the scope of influence compared to the original desire to just have the administrators reset passwords.

Another key point about our example is to consider which user accounts they would be able to reset the password for. If you give a user membership in the Account Operators group, they will be able to reset the password for the following users:

  •     Administrator account
  •     All IT staff
  •     Executives
  •     HR personnel

source: microsoft & windowsecurity

Everything You Need to Know about Networking Basics

Networking Basics: Switches

Switches are used to connect multiple devices on the same network within a building or campus. For example, a switch can connect your computers, printers and servers, creating a network of shared resources. The switch, one aspect of your networking basics, would serve as a controller, allowing the various devices to share information and talk to each other. Through information sharing and resource allocation, switches save you money and increase productivity.

There are two basic types of switches to choose from as part of your networking basics: managed and unmanaged.

  1. An unmanaged switch works out of the box and does not allow you to make changes. Home-networking equipment typically offers unmanaged switches.
  2. A managed switch allows you access to program it. This provides greater flexibility to your networking basics because the switch can be monitored and adjusted locally or remotely to give you control over network traffic, and who has access to your network.

A sample 8-port Ethernet switch

Networking Basics: Routers

Routers, the second valuable component of your networking basics, are used to tie multiple networks together. For example, you would use a router to connect your networked computers to the Internet and thereby share an Internet connection among many users. The router will act as a dispatcher, choosing the best route for your information to travel so that you receive it quickly.

Routers analyze the data being sent over a network, change how it is packaged, and send it to another network, or over a different type of network. They connect your business to the outside world, protect your information from security threats, and can even decide which computers get priority over others.
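The “choosing the best route” step is a longest-prefix match against the routing table. A toy sketch of that decision in shell and awk (real routers do this in hardware; all addresses and routes below are illustrative):

```shell
# Read a routing table of "prefix/len gateway" lines on stdin and
# print the gateway of the most specific route matching the argument.
route_lookup() {
    awk -v dest="$1" '
    function ip2int(ip,  a) { split(ip, a, "."); return ((a[1]*256 + a[2])*256 + a[3])*256 + a[4] }
    BEGIN { best = -1 }
    {
        split($1, p, "/"); len = p[2] + 0
        shift = 2 ^ (32 - len)   # size of the host part for this prefix
        # Route matches if destination and prefix agree on the network part.
        if (int(ip2int(dest) / shift) == int(ip2int(p[1]) / shift) && len > best) {
            best = len; gw = $2
        }
    }
    END { print (best >= 0 ? gw : "no route") }'
}

printf '%s\n' \
    "0.0.0.0/0 192.168.1.1" \
    "10.0.0.0/8 10.0.0.254" \
| route_lookup 10.1.2.3    # → 10.0.0.254 (the /8 beats the default route)
```

Both routes match 10.1.2.3, but the /8 is more specific than the /0 default route, so its gateway wins; that is exactly the dispatcher role described above.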

Depending on your business and your networking plans, you can choose from routers that include different capabilities. These can include networking basics such as:

  •  Firewall: Specialized software that examines incoming data and protects your business network against attacks
  •  Virtual Private Network (VPN): A way to allow remote employees to safely access your network
  •  IP phone network: Combines your company’s computer and telephone networks, using voice and conferencing technology, to simplify and unify your communications
A sample router on the market

ITIL – IT Infrastructure Library

What is ITIL?

The ITIL (IT Infrastructure Library) is the most widely adopted approach for IT Service Management in the world.  It provides a practical, no-nonsense framework for identifying, planning, delivering and supporting IT services to the business.

ITIL advocates that IT services must be aligned to the needs of the business and underpin the core business processes. It provides guidance to organizations on how to use IT as a tool to facilitate business change, transformation and growth.

The ITIL best practices are currently detailed within five core publications which provide a systematic and professional approach to the management of IT services, enabling organizations to deliver appropriate services and continually ensure they are meeting business goals and delivering benefits.

ITIL version 2 consisted of six sets: Service Support; Service Delivery; Planning to Implement Service Management; ICT Infrastructure Management; Applications Management; and The Business Perspective.

ITIL has had a long history of development. Many IT professionals believe that ITIL grew out of the “yellow books”, best practices and guidelines used within IBM during the 1980s; however, it wasn't until the mid-1990s that ITIL became a formal library of IT best-practice frameworks. ITIL version 3, anticipated by IT professionals around the world for years beforehand, was released in May 2007. It packages five core texts: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. Further updates were made in the summer of 2011 to correct errors, respond to reviewer feedback, remove inconsistencies, and improve clarity and structure.

Successful introduction of IT Service Management with ITIL should deliver the following benefits:

  •     Improved customer satisfaction through a more professional approach to service delivery
  •     Improved IT services through the use of proven best practice processes
  •     Improved ROI of IT
  •     Improved delivery of third party services through the specification of ITIL
  •     Improved morale of service delivery and recipient staff
  •     Increased competence, capability and productivity of IT staff
  •     Increased staff retention
  •     Reduced cost of training
  •     Improved systems/ applications availability
  •     Reduced cost/ incident
  •     Reduced hidden costs that traditionally increase the TCO substantially
  •     Better asset utilisation
  •     A clear business differentiator from competitors
  •     Closely aligned to commercial business services and products
  •     Greater visibility of IT costs
  •     Greater visibility of IT assets
  •     A benchmark to measure performance against in IT projects or services
  •     Reduced cost of recruitment and training – hiring ITIL qualified people is easier

ITIL gives an adaptive and flexible framework for managing IT services and encourages you to use common sense rather than follow a rigid set of rules. ITIL will create a common understanding between your IT staff, suppliers, contractors and users within the business by creating a common approach and language towards IT services.