DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS
To emerge as one among the best schools of Computer Applications, recognized at the National and International levels.
To offer world-class education with well-qualified and experienced teachers and state-of-the-art infrastructure for producing graduates with global competency in the field of Computer Applications.
COSMIC (Creations of STACS Members in Computers)
Editorial Board Patron
Advisor : Dr. V. Kulandaiswamy (Principal)
Staff Advisor : Mrs. M. Renukadevi (Lecturer)
Student Editors : Mr. S. Suresh, Mr. G. Rajeshkumar, Ms. Vedhapriya
Website : www.mcaatstc.com, www.stc.ac.in
THE EDITOR SPEAKS……
Today's world is filled with global competition, so we must sharpen our thinking and concentrate fully on sound technology in order to grasp its many technical concepts. Our field is full of innovative ideas that become useful and attractive when we apply them properly. All of us should therefore look toward the exciting possibilities of IT and produce new ideas on the way to becoming complete technical professionals. This magazine is meant to help all of us give voice to our technical thoughts. As the proverb says, "Nothing is impossible for a willing heart." We open this issue with many delightful and surprising features, and we look forward to your response to this issue and to those to come.
Report on process plan 2009-2010
CELLULAR NEURAL NETWORKS
A cellular neural network (CNN) is an artificial neural network which features a multi-dimensional …
CLOUD COMPUTING ARCHITECTURE
Everyone has an opinion on what is cloud computing. It can be the ability to rent a server or a thousand …
DIGITAL IMAGE PROCESSING
Pictures are the most common and convenient means of conveying or transmitting information. A picture is worth a thousand …
MOBILE COMPUTING
Mobile computing is a generic term describing one's ability to use technology while moving, as…
SUCCESS SECRETS OF TOPPERS
Planned studies, hard work and inner motivation are the keys to success. Strong willpower …..
ARTIFICIAL NEURAL NETWORK
This site is intended to be a guide on technologies of neural networks, technologies that …
3G radio transmission technologies (RTTs).
You will find the subjects covered in this section useful…
DATA MINING
Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different…
JOB INTERVIEW TIPS
Practice answering interview questions and practice your responses to the typical job…
CELLULAR NEURAL NETWORKS
Department of Computer Applications (MCA)
A cellular neural network (CNN) is an artificial neural network which features a multi-dimensional array of neurons and local interconnections among the cells. The original CNN paradigm was first proposed by Chua and Yang in 1988. The two most fundamental ingredients of the CNN paradigm are: the use of analog processing cells with continuous signal values, and local interaction within a finite radius. A CNN is a nonlinear analog circuit which processes signals in real time. It is made of a massive aggregate of regularly spaced cloned circuits, called cells, which communicate with each other directly only through their nearest neighbors.
Architecture of Cellular Neural Networks
Any cell in a CNN is connected only to its neighbor cells. The adjacent cells can interact directly with each other. Cells not directly connected together may affect each other indirectly because of the propagation effects of the dynamics of CNNs. An example of a two-dimensional CNN is shown below.
Every cell is influenced by a limited number of cells in its environment. This locality of connections between the units is the main difference between CNNs and other neural networks.
Large CNN chips can be implemented using VLSI techniques.
The figure above shows the emphasized cell (black) connected to its nearest neighbors (gray). The cells marked in gray represent the neighborhood cells of the black cell. The neighborhood includes the black cell itself. This is called a "3*3-neighborhood".
Similarly, we could define a "5*5-neighborhood", a "7*7-neighborhood" and so on.
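A small helper, sketched in Python with illustrative names, makes this definition concrete: radius r = 1 gives the "3*3-neighborhood", r = 2 the "5*5-neighborhood", and so on, clipped at the borders of an n-by-m grid.

```python
def neighborhood(i, j, r, n, m):
    """Indices of the (2r+1) x (2r+1) neighborhood of cell (i, j),
    including the cell itself, clipped at the borders of an n-by-m grid."""
    return [(k, l)
            for k in range(max(0, i - r), min(n, i + r + 1))
            for l in range(max(0, j - r), min(m, j + r + 1))]

# Example: the 3*3-neighborhood of cell (1, 1) on a 4-by-4 grid has 9 cells.
print(len(neighborhood(1, 1, 1, 4, 4)))   # 9
```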
The basic circuit unit of CNNs is called a cell. It contains linear and nonlinear circuit elements, which typically are linear capacitors, linear resistors, linear and nonlinear controlled sources, and independent sources. All the cells of a CNN have the same circuit structure and element values. A typical circuit of a single cell is shown in the figure below.
Each cell contains one independent voltage source Eu(i,j) (input), one independent current source I (bias), several voltage-controlled current sources Inu(i,j) and Iny(i,j), and one voltage-controlled voltage source Ey(i,j) (output). The controlled current sources Inu(i,j) are coupled to neighbor cells via the control input voltage of each neighbor cell. Similarly, the controlled current sources Iny(i,j) are coupled to their neighbor cells via the feedback from the output voltage of each neighbor cell.
The cell C(i,j) has direct connections to its neighbors through two kinds of weights: the feedback weights a(k,l;i,j) and the control weights b(k,l;i,j), where the index pair (k,l;i,j) represents the direction of signal from C(i,j) to C(k,l). The coefficients a(k,l;i,j) are arranged in the feedback-Template or A-Template. The coefficients b(k,l;i,j) are arranged in the control-Template or B-Template.
The A-Template and the B-Template are assumed to be the same for all the cells in the network. The global behavior of a CNN is characterized by a Template Set containing the A-Template, the B-Template, and the Bias I. If we assume a "3*3-neighborhood", the Template Set consists of 19 coefficients.
The external input to the cell is typically assumed to be constant over a certain operation interval. Therefore, the total input current to the cell is given by the weighted sum of control inputs and weighted sum of feedback outputs. In addition, a constant bias term (I) is added to the cell. Due to the capacitance C and resistance R, the state voltage x(i,j)
satisfies the following differential equation:
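In the standard Chua-Yang formulation, with the template notation defined above and R, C denoting the cell's linear resistance and capacitance, the state equation reads:

C \frac{dx_{ij}(t)}{dt} = -\frac{1}{R}\, x_{ij}(t) + \sum_{C(k,l) \in N_r(i,j)} a(k,l;i,j)\, y_{kl}(t) + \sum_{C(k,l) \in N_r(i,j)} b(k,l;i,j)\, u_{kl}(t) + I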
where the sums run over the cells C(k,l) in the neighborhood N_r(i,j) of the specific cell.
Without loss of generality, the time constant T = R*C can be set to 1.
The only nonlinear element in each cell is a piecewise-linear voltage controlled voltage source with characteristic
y(i,j) = f(x(i,j)).
A widely used nonlinearity is the piecewise-linear function as given by:
y(i,j) = f(x(i,j)) = 0.5*(|x(i,j) + 1| - |x(i,j) - 1|)
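As a rough sketch (not from the original article), the piecewise-linear output and a single forward-Euler update of the state equation above can be written in Python; the function and array names and the step size dt are illustrative assumptions:

```python
import numpy as np

def output(x):
    # Piecewise-linear output: y = f(x) = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def cell_update(x_ij, A, B, y_nbhd, u_nbhd, I, dt=0.1):
    """One forward-Euler step of the cell state equation with T = R*C = 1.

    A, B    : 3x3 feedback (A-Template) and control (B-Template) arrays
    y_nbhd  : 3x3 array of neighbor outputs y(k,l)
    u_nbhd  : 3x3 array of neighbor inputs u(k,l)
    I       : bias term
    """
    dx = -x_ij + np.sum(A * y_nbhd) + np.sum(B * u_nbhd) + I
    return x_ij + dt * dx
```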
Global behavior of Cellular Neural Networks
In image processing, n-by-m rectangular grid arrays are often used. n and m are the numbers of rows and columns, respectively. Each cell in a CNN corresponds to an element of the array.
Assuming that each cell is connected to its nearest neighbors only ("3*3-neighborhood") and that the local connections of a cell do not depend on the cell's position, the Template set contains 19 coefficients (A-Template: a1 .. a9, B-Template: b1 .. b9, Bias I). The behavior of the CNN is completely determined by this Template set.
New CNN-Templates for arbitrary tasks may be found using a training algorithm, or by defining local rules for a given global task. The local rules describe a cell's equilibrium state depending on the inputs and outputs of the neighbor cells.
The inputs and the outputs of the neighbor cells are assumed to be constant. The dynamics of the cell itself are not specified.
If Template values satisfying the local rules are found, simulations are very helpful for testing the dynamic global behavior of the entire array of cells.
Optimal coefficient calculation leads to solutions which converge after a short time. This means that the output of every cell reaches its final value y = +1 or y = -1 after a short time.
Characteristics of a simple CNN Template
• Template set
We assume "3*3-neighborhood". A simple Template set for edge extraction is given by:
• Global task : Binary edge detection
If the input image is a binary image (black and white), the output of the CNN will be a binary image showing only the edges of the input image. If the input image has intermediate (gray) values, the operation of the CNN with this simple Template set is not well defined.
Input : U(t) = static binary image
Initial state : X(0) = arbitrary (reason: Feedback Template = 0)
Output : Y(t) converges toward a binary image showing all edges of the input image.
1. A white pixel never turns black.
2. A black pixel turns white if it is surrounded entirely by black pixels.
3. A black pixel never turns white when at least one neighbor cell is white. In this case, the cell belongs to the edge of the object.
The screenshots below show the correct behavior of the Template set.
The picture above is the static input to the CNN; it does not change during this small time period. The four pictures below show the dynamical behavior of the entire grid. Starting with the initial state on the left, they show the state of the CNN array at t = 0.2, t = 0.5 and t = 2.
After two time units (t = 2), the output of the CNN shows the edges of the input image. The Template set produces the desired output.
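The behavior described above can be reproduced with a short Python simulation. The template values used here (A = 0, B with a center weight of 8 and surround weights of -1, bias I = -1) are one commonly published edge-extraction template that is consistent with the three rules listed earlier, but they are an assumption rather than the exact values from the original figure; black pixels are coded as +1 and white pixels as -1:

```python
import numpy as np

A = np.zeros((3, 3))                       # zero feedback template
B = np.array([[-1., -1., -1.],
              [-1.,  8., -1.],
              [-1., -1., -1.]])            # control template (assumed values)
I = -1.0                                   # bias (assumed value)

def f(x):
    # Piecewise-linear output y = 0.5 * (|x + 1| - |x - 1|)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def run_cnn(u, steps=500, dt=0.01):
    """Integrate dx/dt = -x + sum(A*y) + sum(B*u) + I over the whole grid.
    u is the static binary input image (black = +1, white = -1)."""
    n, m = u.shape
    x = np.zeros_like(u)                       # arbitrary initial state (A = 0)
    up = np.pad(u, 1, constant_values=-1.0)    # white border cells
    for _ in range(steps):
        yp = np.pad(f(x), 1, constant_values=-1.0)
        dx = -x + I
        for di in range(3):
            for dj in range(3):
                dx = dx + A[di, dj] * yp[di:di + n, dj:dj + m]
                dx = dx + B[di, dj] * up[di:di + n, dj:dj + m]
        x = x + dt * dx
    return f(x)

# A 4x4 black square on a white 8x8 background: interior pixels of the
# square turn white, its boundary pixels stay black (the extracted edge).
u = -np.ones((8, 8))
u[2:6, 2:6] = 1.0
edges = run_cnn(u)
```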
CNNs can be used in many scientific applications:
In signal processing, CNNs show great promise in solving many complex problems that cannot be solved satisfactorily using conventional approaches.
• solve the maximum-likelihood estimation of signals in the presence of intersymbol interference and white Gaussian noise
In image processing that deals with gray-scale image inputs, CNNs can be applied to perform
• feature extraction & classification
• motion detection & estimation
• collision avoidance
• object counting & size estimation
• path tracking
In analyzing 3-D complex surfaces, the CNN is capable of
• detecting minima and maxima
• detecting areas with gradients that exceed a given threshold
In solving partial differential equations, CNNs are suitable for reducing non-visual problems to geometric maps, such as
• thermographic maps
• antenna-array images
• medical maps and images
Cloud Computing Architecture
Mrs. R.GUNAVATHI, HOD, MCA
Everyone has an opinion on what cloud computing is. It can be the ability to rent a server or a thousand servers and run a geophysical modeling application on the most powerful systems available anywhere. It can be the ability to rent a virtual server, load software on it, turn it on and off at will, or clone it ten times to meet a sudden workload demand. It can be storing and securing immense amounts of data that are accessible only by authorized applications and users. It can be supported by a cloud provider that sets up a platform that includes the OS, Apache, a MySQL™ database, Perl, Python, and PHP with the ability to scale automatically in response to changing workloads. Cloud computing can be the ability to use applications on the Internet that store and protect data while providing a service: anything including email, sales force automation and tax preparation. It can be using a storage cloud to hold application, business, and personal data. And it can be the ability to use a handful of Web services to integrate photos, maps, and GPS information to create a mashup in customer Web browsers.
The Nature of Cloud Computing
Cloud computing builds on established trends for driving the cost out of the delivery of services while increasing the speed and agility with which services are deployed. It shortens the time from sketching out an application architecture to actual deployment. Cloud computing incorporates virtualization, on-demand deployment, Internet delivery of services, and open source software. From one perspective, cloud computing is nothing new because it uses approaches, concepts, and best practices that have already been established. From another perspective, everything is new because cloud computing changes how we invent, develop, deploy, scale, update, maintain, and pay for applications and the infrastructure on which they run. In this article, we examine these trends and how they have become core to what cloud computing is all about.
Virtual machines as the standard deployment object
Over the last several years, virtual machines have become a standard deployment object. Virtualization further enhances flexibility because it abstracts the hardware to the point where software stacks can be deployed and redeployed without being tied to a specific physical server. Virtualization enables a dynamic datacenter where servers provide a pool of resources that are harnessed as needed, and where the relationship of applications to compute, storage, and network resources changes dynamically in order to meet both workload and business demands. With application deployment decoupled from server deployment, applications can be deployed and scaled rapidly, without having to first procure physical servers.
Virtual machines have become the prevalent abstraction — and unit of deployment — because they are the least-common denominator interface between service providers and developers. Using virtual machines as deployment objects is sufficient for 80 percent of usage, and it helps to satisfy the need to rapidly deploy and scale applications. Virtual appliances, virtual machines that include software that is partially or fully configured to perform a specific task such as a Web or database server, further enhance the ability to create and deploy applications rapidly. The combination of virtual machines and appliances as standard deployment objects is one of the key features of cloud computing.
The on-demand, self-service, pay-by-use model
The on-demand, self-service, pay-by-use nature of cloud computing is also an extension of established trends. From an enterprise perspective, the on-demand nature of cloud computing helps to support the performance and capacity aspects of service-level objectives. The self-service nature of cloud computing allows organizations to create elastic environments that expand and contract based on the workload and target performance parameters. And the pay-by-use nature of cloud computing may take the form of equipment leases that guarantee a minimum level of service from a cloud provider.
Virtualization is a key feature of this model. IT organizations have understood for years that virtualization allows them to quickly and easily create copies of existing environments, sometimes involving multiple virtual machines, to support test, development, and staging activities. The cost of these environments is minimal because they can coexist on the same servers as production environments and use few resources. Likewise, new applications can be developed and deployed in new virtual machines on existing servers, opened up for use on the Internet, and scaled if the application is successful in the marketplace. This lightweight deployment model has already led to a "Darwinist" approach to business development where beta versions of software are made public and the market decides which applications deserve to be scaled and developed further or quietly retired.
Cloud computing extends this trend through automation. Instead of negotiating with an IT organization for resources on which to deploy an application, a compute cloud is a self-service proposition where a credit card can purchase compute cycles, and a Web interface or API is used to create virtual machines and establish network relationships between them. Instead of requiring a long-term contract for services with an IT organization or a service provider, clouds work on a pay-by-use, or pay-by-the-sip, model where an application may exist to run a job for a few minutes or hours, or it may exist to provide services to customers on a long-term basis. Compute clouds are built as if applications are temporary, and billing is based on resource consumption: CPU hours used, volumes of data moved, or gigabytes of data stored.
The ability to use and pay for only the resources consumed shifts the risk of how much infrastructure to purchase from the organization developing the application to the cloud provider. It also shifts the responsibility for architectural decisions from application architects to developers. This shift can increase risk, and that risk must be managed by enterprises whose processes exist for a reason; the expertise of system, network, and storage architects still needs to be factored into cloud computing designs.
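As a toy illustration of pay-by-use billing, the bill is simply the sum of each metered resource times its rate; the rates below are made-up placeholder numbers, not any provider's actual price list:

```python
# Hypothetical per-unit rates; real providers publish their own price lists.
RATES = {"cpu_hour": 0.10, "gb_transferred": 0.12, "gb_stored_per_month": 0.03}

def monthly_bill(cpu_hours, gb_transferred, gb_stored):
    """Pay only for what was consumed: CPU hours, data moved, data stored."""
    return (cpu_hours * RATES["cpu_hour"]
            + gb_transferred * RATES["gb_transferred"]
            + gb_stored * RATES["gb_stored_per_month"])

# One server running all month, 50 GB moved, 200 GB stored:
print(monthly_bill(cpu_hours=720, gb_transferred=50, gb_stored=200))  # 84.0
```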
Infrastructure is programmable
This shift of architectural responsibility has significant consequences. In the past, architects would determine how the various components of an application would be laid out onto a set of servers, how they would be interconnected, secured, managed, and scaled. Now, a developer can use a cloud provider’s API to create not only an application’s initial composition onto virtual machines, but also how it scales and evolves to accommodate workload changes. Consider this analogy: historically, a developer writing software using the Java™ programming language determines when it’s appropriate to create new threads to allow multiple activities to progress in parallel. Today, a developer can discover and attach to a service with the same ease, allowing an application to scale to the point where it might engage thousands of virtual machines in order to accommodate a huge spike in demand. The ability to program application architecture dynamically puts enormous power in the hands of developers with a commensurate amount of responsibility. To use cloud computing most effectively, a developer must also be an architect, and that architect needs to be able to create a self-monitoring and self-expanding application. The developer/architect needs to understand when it’s appropriate to create a new thread versus a new virtual machine, along with the architectural patterns for how they are interconnected. When this power is well understood and harnessed, the results can be spectacular.
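As a minimal sketch of this idea, the application itself can request more virtual machines when load grows. The example uses the AWS boto3 SDK purely as one possible cloud API; the machine image ID, instance type, and scaling rule are placeholder assumptions, not values from the article:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_out(pending_jobs, jobs_per_server=100):
    """Launch one additional virtual machine for every 100 queued jobs."""
    needed = pending_jobs // jobs_per_server
    if needed > 0:
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder machine image
            InstanceType="t3.micro",           # placeholder instance type
            MinCount=needed,
            MaxCount=needed,
        )
```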
A story that is already becoming legendary is Animoto’s mashup tool that creates a video from a set of images and music. The company’s application scaled from 50 to 3,500 servers in just three days, due in part to an architecture that allowed it to scale easily. For this to work, the application had to be built to scale horizontally, have limited state, and manage its own deployment through cloud APIs. For every success story such as this, there will likely be a similar story where the application is not capable of self-scaling and where it fails to meet consumer demand. The importance of this shift from developer to developer/architect cannot be overstated. Consider whether your enterprise datacenter could scale an application this rapidly to accommodate such a rapidly growing workload, and whether cloud computing could augment your current capabilities.
Cloud computing infrastructure models
There are many considerations for cloud computing architects to make when moving from a standard enterprise application deployment model to one based on cloud computing. There are public and private clouds that offer complementary benefits, there are three basic service models to consider, and there is the value of open APIs versus proprietary ones.
Public, private, and hybrid clouds
IT organizations can choose to deploy applications on public, private, or hybrid clouds, each of which has its trade-offs. The terms public, private, and hybrid do not dictate location. While public clouds are typically “out there” on the Internet and private clouds are typically located on premises, a private cloud might be hosted at a collocation facility as well.
Companies may make a number of considerations with regard to which cloud computing model they choose to employ, and they might use more than one model to solve different problems. An application needed on a temporary basis might be best suited for deployment in a public cloud because it helps to avoid the need to purchase additional equipment to solve a temporary need. Likewise, a permanent application, or one that has specific requirements on quality of service or location of data, might best be deployed in a private or hybrid cloud.
Public clouds are run by third parties, and applications from different customers are likely to be mixed together on the cloud’s servers, storage systems, and networks (Figure 3). Public clouds are most often hosted away from customer premises, and they provide a way to reduce customer risk and cost by providing a flexible, even temporary extension to enterprise infrastructure. If a public cloud is implemented with performance, security, and data locality in mind, the existence of other applications running in the cloud should be transparent to both cloud architects and end users. Indeed, one of the benefits of public clouds is that they can be much larger than a company’s private cloud might be, offering the ability to scale up and down on demand, and shifting infrastructure risks from the enterprise to the cloud provider, if even just temporarily. Portions of a public cloud can be carved out for the exclusive use of a single client, creating a virtual private datacenter. Rather than being limited to deploying virtual machine images in a public cloud, a virtual private datacenter gives customers greater visibility into its infrastructure. Now customers can manipulate not just
virtual machine images, but also servers, storage systems, network devices, and network topology. Creating a virtual private datacenter with all components located in the same facility helps to lessen the issue of data locality because bandwidth is abundant and typically free when connecting resources within the same facility.
A public cloud provides services to multiple customers, and is typically deployed at a collocation facility.
Private clouds are built for the exclusive use of one client, providing the utmost control over data, security, and quality of service (Figure 4). The company owns the infrastructure and has control over how applications are deployed on it. Private clouds may be deployed in an enterprise datacenter, and they also may be deployed at a collocation facility. Private clouds can be built and managed by a company’s own IT organization or by a cloud provider. In this “hosted private” model, a company such as Sun can install, configure, and operate the infrastructure to support a private cloud within a company’s enterprise datacenter. This model gives companies a high level of control over the use of cloud resources while bringing in the expertise needed to establish and operate the environment.
Private clouds may be hosted at a collocation facility or in an enterprise datacenter. They may be supported by the company, by a cloud provider, or by a third party such as an outsourcing firm.
Hybrid clouds combine both public and private cloud models (Figure 5). They can help to provide on-demand, externally provisioned scale. The ability to augment a private cloud with the resources of a public cloud can be used to maintain service levels in the face of rapid workload fluctuations. This is most often seen with the use of storage clouds to support Web 2.0 applications. A hybrid cloud can also be used to handle planned workload spikes: in an approach sometimes called "surge computing," a public cloud is used to perform periodic tasks that can be deployed there easily. Hybrid clouds introduce the complexity of determining how to distribute applications across both a public and a private cloud. Among the issues that need to be considered is the relationship between data and processing resources. If the data is small, or the application is stateless, a hybrid cloud can be much more successful than if large amounts of data must be transferred into a public cloud for a small amount of processing.
Cloud computing benefits
• Reduced run time and response time
• Minimized infrastructure risk
• Lower cost of entry
• Increased pace of innovation
DIGITAL IMAGE PROCESSING
Pictures are the most common and convenient means of conveying or transmitting information. A picture is worth a thousand words. Pictures concisely convey information about positions, sizes and inter-relationships between objects. They portray spatial information that we can recognize as objects.
Human beings are good at deriving information from such images because of our innate visual and mental abilities. About 75% of the information received by humans is in pictorial form. In the present context, we consider the analysis of pictures that employ an overhead perspective, including radiation not visible to the human eye. Thus our discussion will focus on the analysis of remotely sensed images. These images are represented in digital form. When represented as numbers, brightness can be added, subtracted, multiplied, divided and, in general, subjected to statistical manipulations that are not possible if an image is presented only as a photograph.
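A small sketch of what representing brightness as numbers makes possible; the array name and pixel values are invented for illustration. Once the image is an array, arithmetic and simple statistics apply directly:

```python
import numpy as np

# A tiny 3x3 gray-scale "image": each number is a brightness value (0-255).
image = np.array([[ 10,  52, 200],
                  [ 35, 128, 221],
                  [ 90, 170, 255]], dtype=np.uint8)

brighter = np.clip(image.astype(int) + 40, 0, 255).astype(np.uint8)    # add a constant brightness
stretched = np.clip(image.astype(int) * 1.5, 0, 255).astype(np.uint8)  # stretch the contrast
print(image.mean(), image.std())   # statistics a photograph alone cannot provide
```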