Thursday, October 29, 2009

Core Banking Solutions

Core Banking Solutions is a term frequently used in banking circles. Advances in technology, especially the internet and information technology, have led to new ways of doing business in banking. These technologies cut down processing time, allow different tasks to be worked on simultaneously, and increase efficiency. The platform where communication technology and information technology are merged to suit the core needs of banking is known as a Core Banking Solution. Here, computer software is developed to perform core banking operations such as:

recording of transactions,

passbook maintenance,

interest calculations on loans and deposits,

customer records,

balance of payments, and

withdrawals.

This software is installed at the different branches of a bank and then interconnected by means of communication lines such as telephone, satellite, and internet links. It allows customers to operate their accounts from any branch that has installed the core banking solution. This new platform has changed the way banks work.
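To make the idea concrete, here is a minimal Python sketch (all names and figures are hypothetical) of the central principle: every branch operates on one shared store of accounts rather than its own local ledger, so a customer can be served from any branch.

    # Hypothetical sketch: branches share one central account store.
    # In a real core banking system this would be a central database
    # reached over telephone, satellite or internet links.

    central_accounts = {"ACC-001": 5000.0}  # account id -> balance

    def withdraw(branch, account_id, amount):
        """Any branch can operate on the same central record."""
        balance = central_accounts[account_id]
        if amount > balance:
            raise ValueError("insufficient funds")
        central_accounts[account_id] = balance - amount
        print(f"{branch}: withdrew {amount}, balance now {central_accounts[account_id]}")

    withdraw("Mumbai branch", "ACC-001", 1000)  # customer at one branch
    withdraw("Delhi branch", "ACC-001", 500)    # same account, another branch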

Core banking

Core banking is all about knowing customers' needs: providing them with the right products at the right time, through the right channels, 24 hours a day, 7 days a week, using technologies such as the internet, mobile phones, and ATMs.

HTTP 404

Related HTTP status codes: 301 Moved Permanently · 302 Found · 303 See Other · 403 Forbidden · 404 Not Found

The 404 or Not Found error message is an HTTP standard response code indicating that the client was able to communicate with the server, but the server could not find what was requested. 404 errors should not be confused with "server not found" or similar errors, in which a connection to the destination server could not be made at all. Another similar error is "410 Gone", which indicates that the requested resource has been intentionally removed and will not be available again. A 404, by contrast, makes no such promise: the requested resource may become available again in the future.
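As an illustration, the following minimal Python sketch (the URL is just a placeholder) distinguishes a 404, where the server responds, from a "server not found" failure, where no connection is made at all:

    import urllib.request
    import urllib.error

    def status_of(url):
        """Return the HTTP status code for a URL, or None if unreachable."""
        try:
            with urllib.request.urlopen(url) as response:
                return response.status
        except urllib.error.HTTPError as err:
            return err.code   # server reached; it answered 404, 403, ...
        except urllib.error.URLError:
            return None       # "server not found": no connection at all

    print(status_of("https://example.com/no-such-page"))  # typically 404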

Root Directory

In computer file systems, the root directory is the first or top-most directory in a hierarchy. It can be likened to the root of a tree - the starting point where all branches originate.
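For example, this small Python snippet (a sketch; the output depends on the operating system) locates the root of the current file system hierarchy:

    import os

    # On Unix-like systems the root directory is "/"; on Windows each
    # drive has its own root, such as "C:\".
    root = os.path.abspath(os.sep)
    print(root)
    print(os.listdir(root)[:5])   # a few of the branches under the root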

Microprocessor

A microprocessor incorporates most or all of the functions of a central processing unit (CPU) on a single integrated circuit (IC).[1] The first microprocessors emerged in the early 1970s and were used in electronic calculators, performing binary-coded decimal (BCD) arithmetic on 4-bit words. Other embedded uses of 4- and 8-bit microprocessors, such as terminals, printers, and various kinds of automation, followed quickly. Affordable 8-bit microprocessors with 16-bit addressing also led to the first general-purpose microcomputers in the mid-1970s.

Programming language

Fourth-generation programming language

A fourth-generation programming language (abbreviated 4GL), a category dating roughly from the 1970s to 1990, is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software.[1] In the evolution of computing, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was in turn followed by efforts to define and use a 5GL.

Bits, Bytes, Mega, Giga, Tera

1 bit = a 1 or 0 (b)
4 bits = 1 nybble (no standard symbol)
8 bits = 1 byte (B)
1024 bytes = 1 Kilobyte (KB)
1024 Kilobytes = 1 Megabyte (MB)
1024 Megabytes = 1 Gigabyte (GB)
1024 Gigabytes = 1 Terabyte (TB)

Common prefixes:
- kilo, meaning 1,000 (one thousand), 10^3 (kilometer = 1,000 meters)
- mega, meaning 1,000,000 (one million), 10^6 (megawatt = 1,000,000 watts)
- giga, meaning 1,000,000,000 (one billion), 10^9 (gigawatt = 1,000,000,000 watts)
- tera, meaning 1,000,000,000,000 (one trillion), 10^12

The smallest unit of transfer is one bit. It holds the value of a 1 or a 0 (binary coding). Eight of these 1s and 0s are called a byte.

Why eight? The earliest computers could only send 8 bits at a time, so it was only natural to start writing code in sets of 8 bits. Such a set came to be called a byte.

A bit is represented with a lowercase "b," whereas a byte is represented with an uppercase "B." So Kb is kilobits, and KB is kilobytes. A kilobyte is eight times larger than a kilobit.

Eight of these simple 1s and 0s put together make one byte. The string 10010101 is exactly one byte. So a small GIF image of about 4 KB contains about 4,000 such strings of eight 1s and 0s. At 8 bits per string, that is over (4,000 x 8) 32,000 1s and 0s for a single GIF image.
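A quick Python check of this arithmetic (a sketch; the GIF size is just the example figure above):

    byte = 0b10010101            # the one-byte string from the text
    print(byte)                  # 149, its value as a number
    print(format(byte, "08b"))   # back to the eight 1s and 0s
    print(4000 * 8)              # 32000 bits in a ~4 KB image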

How many bytes are in a kilobyte (KB)? One may think it's 1000 bytes, but it's really 1024. Why is this so? It turns out that early computer engineers, who dealt with the tiniest amounts of storage, noticed that 2^10 (1024) was very close to 10^3 (1000); so, based on the prefix kilo, for 1000, they created the KB. (You may have heard of the kilometer (km), which is 1,000 meters.) So in actuality, one KB is really 1024 bytes, not 1000. It's a small difference, but it adds up.

Next is the MB, or megabyte, with mega meaning one million. It seems logical that one mega (million) byte would be 1,000,000 (one million) bytes, but it's not. One megabyte is 1024 x 1024 bytes: one kilobyte is 1024 bytes, and 1024 of those kilobytes make one megabyte, which is (1024 x 1024) 1,048,576 bytes. In short, one megabyte is really 1,048,576 bytes.

That is a difference of roughly 47 KB, which is a decent amount. If you have a calculator, you can check it: the difference is 48,576 bytes, and dividing by 1024 gives the number of real kilobytes: 47.4375.

All of this really comes into play when you deal with gigabytes, or roughly one billion bytes. One real gigabyte is actually 1024 x 1024 x 1024 bytes, which is 1,073,741,824 bytes. However, most people simplify this by saying that one gigabyte is 1,000,000,000 (one billion) bytes, which seems to make sense because the prefix giga means one billion.
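The same arithmetic in Python, for checking the figures above:

    KB = 1024
    MB = 1024 ** 2               # 1,048,576 bytes in a real megabyte
    GB = 1024 ** 3               # 1,073,741,824 bytes in a real gigabyte

    print(MB)                    # 1048576
    print(MB - 10 ** 6)          # 48576 extra bytes versus "one million"
    print((MB - 10 ** 6) / KB)   # 47.4375 real kilobytes of difference
    print(GB)                    # 1073741824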

ISPs

List of ISPs in India

Internet service providers in India.

Generations of Computer

Sixth generation (of video game consoles)

This generation saw a move towards PC-like architectures in gaming consoles, as well as a shift towards using DVDs for game media. This brought games that were both longer and more visually appealing. Furthermore, this generation also saw experimentation with online console gaming and implementing both flash and hard drive storage for game data.

  • Sega's Dreamcast, released in North America on September 9, 1999, was the company's last video game console and the first of the generation's consoles to be discontinued. Sega implemented a special type of optical media called the GD-ROM. These discs were created to prevent software piracy, which had been easier on consoles of the previous generation; however, the format was soon cracked as well. The console was discontinued in 2001, and Sega transitioned to software development and publishing only. It also sported a 33.6 kbit/s modem, which could be used to access the internet or to play some games, such as Phantasy Star Online, online.
  • Sony's PlayStation 2 was released in North America on October 26, 2000 as the follow-up to its highly successful PlayStation, and was also the first home game console able to play DVDs. As it had done with the original PlayStation in 2000, Sony redesigned the console in 2004 into a smaller version. As of July 2008, 140 million PlayStation 2 units had been sold,[7][8] making it the best-selling console of all time.
  • The Nintendo GameCube was Nintendo's fourth home video game console and the first console by the company to use optical media instead of cartridges. The Nintendo GameCube did not play standard 12 cm DVDs, instead employing smaller 8 cm optical discs.
  • Microsoft's Xbox was the company's first video game console. It was the first console to include a hard drive out of the box for saving games, and its hardware specifications were similar to those of a low-end desktop computer at the time of its release. Though criticized for its bulky size, easily twice that of the competition, and for the awkwardness of the original controller that shipped with it, it eventually gained popularity due in part to the success of the Halo franchise.

VLSI

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have grown in complexity to billions of transistors.

Generations of Computer

Source--http://www.webopedia.com/didyouknow/Hardware_Software/2002/FiveGenerations.asp

The Five Generations of Computers
Last updated: August 28, 2009

The history of computer development is often referred to in reference to the different generations of computing devices. Each generation of computer is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices.

Read about each generation and the developments that led to the current devices that we use today.

First Generation (1940-1956) Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions.

First generation computers relied on machine language, the lowest-level programming language understood by computers, to perform operations, and they could only solve one problem at a time. Input was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.

Second Generation (1956-1963) Transistors
Transistors replaced vacuum tubes and ushered in the second generation of computers. The transistor was invented in 1947 but did not see widespread use in computers until the late 1950s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.

Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.

The first computers of this generation were developed for the atomic energy industry.

Third Generation (1964-1971) Integrated Circuits
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.

Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Fourth Generation (1971-Present) Microprocessors
The microprocessor brought the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth generation computers also saw the development of GUIs, the mouse and handheld devices.

Fifth Generation (Present and Beyond) Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today. The use of parallel processing and superconductors is helping to make artificial intelligence a reality. Quantum computation and molecular and nanotechnology will radically change the face of computers in years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and are capable of learning and self-organization.

DID YOU KNOW...
An integrated circuit (IC) is a small electronic device made out of a semiconductor material. The first integrated circuit was developed in the 1950s by Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor.

Third generation

The mass increase in the use of computers accelerated with 'Third Generation' computers. These generally relied on Jack St. Clair Kilby's invention of the integrated circuit (or microchip), starting around 1965. However, the IBM System/360 used hybrid circuits, which were solid-state devices interconnected on a substrate with discrete wires.

The first integrated circuit was produced in September 1958 but computers using them didn't begin to appear until 1963. Some of their early uses were in embedded systems, notably used by NASA for the Apollo Guidance Computer and by the military in the LGM-30 Minuteman intercontinental ballistic missile.

By 1971, the Illiac IV supercomputer, which was the fastest computer in the world for several years, used about a quarter-million small-scale ECL logic gate integrated circuits to make up sixty-four parallel data processors.[1]

While large 'mainframes' such as the System/360 increased storage and processing capabilities, the integrated circuit also allowed the development of much smaller computers. The minicomputer was a significant innovation in the 1960s and 1970s. It brought computing power to more people, not only through more convenient physical size but also through broadening the computer vendor field. Digital Equipment Corporation became the number two computer company behind IBM with their popular PDP and VAX computer systems. Smaller, affordable hardware also brought about the development of important new operating systems like Unix.

Large-scale integration of circuits led to the development of very small processing units. An early example was the classified CADC processor, used for analyzing flight data in the US Navy's F-14A Tomcat fighter jet. This processor was developed by Steve Geller, Ray Holt and a team from AiResearch and American Microsystems.

In 1966, Hewlett-Packard entered the general purpose computer business with its HP-2116, offering a computational power formerly found only in much larger computers. It supported a wide variety of languages, among them BASIC, ALGOL, and FORTRAN.

Hybrid computers

Hybrid computers are computers that exhibit features of analog computers and digital computers. The digital component normally serves as the controller and provides logical operations, while the analog component normally serves as a solver of differential equations.

Pivot Table

A pivot table is a data summarization tool found in data visualization programs such as spreadsheets (e.g. Microsoft Excel, OpenOffice.org Calc, Lotus 1-2-3). Among other functions, pivot tables can automatically sort, count, and total the data stored in one table or spreadsheet and create a second table displaying the summarized data. They are also useful for quickly creating cross tabs. The user sets up and changes the summary's structure by dragging and dropping fields graphically. This "rotation", or pivoting, of the summary table gives the concept its name. The term pivot table is a generic phrase used by multiple vendors; the specific form PivotTable is a trademark of the Microsoft Corporation.
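For instance, here is a small sketch using the pandas library in Python (the column names and figures are made up) that builds a pivot table summarizing sales by region and product:

    import pandas as pd

    df = pd.DataFrame({
        "region":  ["North", "North", "South", "South"],
        "product": ["A", "B", "A", "B"],
        "sales":   [100, 150, 200, 50],
    })

    # Rows pivot on region, columns on product, cells hold summed sales.
    summary = pd.pivot_table(df, values="sales", index="region",
                             columns="product", aggfunc="sum")
    print(summary)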

RAM

Random-access memory (usually known by its acronym, RAM) is a form of computer data storage. Today it takes the form of integrated circuits that allow stored data to be accessed in any order (i.e., at random). The word random refers to the fact that any piece of data can be returned in a constant time, regardless of its physical location and whether or not it is related to the previous piece of data.
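A simple way to picture this in Python (a sketch; a list here stands in for a block of memory) is that reading any index costs the same, regardless of position:

    data = list(range(1_000_000))

    # Element 999,999 is fetched as quickly as element 0: its location
    # is computed directly from the index rather than found by scanning.
    print(data[0], data[999_999])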

Assembly languages

Assembly languages were used in programming the first computers. They are a family of low-level languages for programming computers, microprocessors, microcontrollers, and other (usually) integrated circuits. They implement a symbolic representation of the numeric machine codes and other constants needed to program a particular CPU architecture. This representation is usually defined by the hardware manufacturer and is based on abbreviations (called mnemonics) that help the programmer remember individual instructions, registers, etc. An assembly language is thus specific to a certain physical or virtual computer architecture (as opposed to most high-level languages, which are usually portable).
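Python has no assembly language of its own, but its standard dis module gives a feel for the idea (a loose analogy, not real CPU assembly): it prints mnemonic opcode names for the CPython virtual machine, just as assembly mnemonics stand in for a CPU's numeric machine codes:

    import dis

    def add(a, b):
        return a + b

    # Prints mnemonics such as LOAD_FAST for the numeric bytecodes
    # that the CPython interpreter actually executes.
    dis.dis(add)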