Genetic Algorithms Based Compressive Sensing Framework

Compressive sensing, also known as compressive sampling or sparse sampling, is a technique for acquiring and reconstructing an image that exploits two facts: the image is sparse in some domain, and, as shown by recent results, a small collection of linear measurements of an image carries enough information for its reconstruction.


In a compressive sensing framework, a single pixel captures the whole image repeatedly by projecting the image onto a tiny array of mirrors, each of which is either on or off (see Figure 1). Images are known to be sparse in some domain [1]. For example, when an image is transformed by the DCT (Discrete Cosine Transform) or a wavelet transform, energy compaction occurs and most of the energy of the original image is contained in a few transform coefficients. In this way, the cost of the many pixels otherwise needed to capture an image is saved, and the camera is in effect a single-pixel camera. This cost matters most for ultra-wideband or terahertz imaging cameras because of the high cost of the sensors involved.
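
This energy compaction is easy to demonstrate. The sketch below uses a smooth synthetic gradient as a stand-in for a natural image (an illustrative choice, not data from our experiments) and measures how much of the energy the largest 1% of 2-D DCT coefficients hold:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative smooth test image (stand-in for a natural image).
n = 64
x = np.linspace(0, 1, n)
img = np.outer(x, x)

# The 2-D DCT concentrates most of the signal energy in a few coefficients.
coeffs = dctn(img, norm="ortho")
energy = coeffs.ravel() ** 2
top = np.sort(energy)[::-1]

# Fraction of total energy captured by the largest 1% of coefficients.
k = max(1, energy.size // 100)
fraction = top[:k].sum() / energy.sum()
print(f"top 1% of DCT coefficients hold {fraction:.2%} of the energy")

# Reconstruction from only those coefficients stays close to the original.
mask = energy.reshape(coeffs.shape) >= top[k - 1]
approx = idctn(coeffs * mask, norm="ortho")
print("max reconstruction error:", np.abs(img - approx).max())
```

For smooth images of this kind the retained 1% of coefficients captures essentially all of the energy, which is exactly the property CS exploits.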

In the traditional image compression framework, the image is captured with a CCD array, compressed using some transform, and the transform coefficients, which are fewer than the total number of pixels in the original image, are sent over the communication channel to the receiver. Since insignificant coefficients are dropped during compression anyway, the question CS asks is why we should sample all the pixels in the image in the first place.

Therefore, in CS, weighted linear combinations of image samples, called compressive measurements, are taken in a basis different from the basis in which the signal is known to be sparse. In [3], Donoho showed that these compressive measurements may be few in number yet still contain all the useful information. Recovering the image involves solving an underdetermined matrix equation, since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, the constraint that the initial signal is sparse in some domain makes it possible to solve this underdetermined system of linear equations.
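
As an illustration of how few numbers such measurements produce, the following sketch takes m = 64 compressive measurements of a length-256 sparse signal. A Gaussian measurement matrix is one common choice in the CS literature; the mirror array described above would correspond to a random 0/1 matrix instead. All sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sparse signal: n samples, only s of them non-zero.
n, s, m = 256, 8, 64            # m measurements << n samples
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)

# Random Gaussian measurement matrix (a common CS choice).
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

# Compressive measurements: weighted linear combinations of the samples.
y = Phi @ x
print(y.shape)                   # far fewer numbers than the n samples
```

Recovering x from y then means solving the underdetermined system Phi @ x = y under the sparsity constraint.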

In the available literature, recovery of the image has commonly been formulated as an L1-norm minimization problem. Needell and Tropp [6] combined L0 and L2 optimizations in an iterative technique called Compressive Sampling Matching Pursuit to achieve better performance. Candès et al. [7] introduced a method that uses L1 minimization with weighted parameters in an iterative search, where the weights for the next iteration are determined from the values of the current one. In [8], the authors employed total variation minimization, using an Augmented Lagrangian to solve the optimization problem. In [9], Wakin et al. presented an algorithm and hardware to support compressive imaging for video representation.
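
For concreteness, the basic L1 formulation can be solved as a linear program. The sketch below is a generic basis-pursuit setup (not the specific algorithms of [6]-[9]): minimizing ||x||_1 subject to Phi @ x = y by splitting x = u - v with u, v >= 0, so the L1 norm becomes a linear objective. Problem sizes are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, s, m = 60, 3, 25                      # illustrative sizes
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x_true

# Basis pursuit: min ||x||_1  s.t.  Phi @ x = y.
# With x = u - v, u >= 0, v >= 0, the objective is sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

With enough measurements relative to the sparsity level, the L1 solution coincides with the true sparse signal even though the system is underdetermined.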

Genetic algorithms have been successfully applied to image processing and compression tasks. In [10], the authors present a method that uses genetic algorithms to speed up computation in fractal image compression. Compression is achieved by encoding all regions in the image with blocks of different sizes, and movable genes are used to improve the computational behavior of the algorithm. In [11], Yimin et al. employed a GA for image compression based on vector quantization coding, using the GA to find an optimal codebook.

To the best of our knowledge, the field of evolutionary computation has not been explored widely in the CS literature, though a similar framework was briefly introduced in [12] using a population-based optimization technique called Particle Swarm Optimization (PSO). PSO is a GA-like iterative algorithm first introduced by Eberhart and Kennedy in [13, 14] for training artificial neural network weights. The technique models a real-world analogy: the social behavior of organisms and the way they interact to share information. A fitness function takes an individual and evaluates how well it fits the problem criteria, and each individual's velocity and position are updated by the following two rules:

v ← w·v + φ_p·r_p·(b − x) + φ_g·r_g·(g − x)

x ← x + v

where v and x are the individual's velocity and position; w, φ_p, and φ_g are learning parameters selected based on the problem; b and g are the best individual and global positions; and r_p and r_g are random numbers between 0 and 1. This work was inspired by the natural phenomenon of a flock of birds searching an area for food, where the birds observe each other's velocity and position to determine where food resides. In the work of David B. et al., PSO was used to locate the sparse solution using a population of random solutions (particles), and they were able to recover several images with different sparsity levels.
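
Assuming the standard PSO formulation, the velocity and position updates described above can be sketched as follows; the swarm size, iteration count, learning parameters, and the toy sphere objective are all illustrative choices, not the setup of [12]:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(fitness, dim, n_particles=30, iters=200, w=0.7, phi_p=1.5, phi_g=1.5):
    """Minimal PSO sketch following the velocity/position update rules."""
    x = rng.uniform(-5, 5, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                 # velocities
    b = x.copy()                                     # per-particle best positions
    b_val = np.array([fitness(p) for p in x])
    g = b[b_val.argmin()].copy()                     # global best position
    for _ in range(iters):
        r_p = rng.random((n_particles, dim))
        r_g = rng.random((n_particles, dim))
        v = w * v + phi_p * r_p * (b - x) + phi_g * r_g * (g - x)
        x = x + v
        val = np.array([fitness(p) for p in x])
        better = val < b_val                         # update personal bests
        b[better], b_val[better] = x[better], val[better]
        g = b[b_val.argmin()].copy()                 # update global best
    return g

# Toy usage: minimize the sphere function; the swarm converges near the origin.
best = pso(lambda p: np.sum(p ** 2), dim=4)
print(best)
```

Note that the only information shared between particles is the global best position g, which is what gives PSO its "flocking" character.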

Unlike genetic algorithms, the PSO method does not use evolution operators such as mutation and crossover, which is the case we investigate in this work. We believe that studying the effect of mutation and crossover together with an L1 optimization approach could shed some light on this field and might lead to a novel way of approaching the problem.
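
For reference, the crossover and mutation operators in question can be sketched on binary chromosomes as follows; single-point crossover and bit-flip mutation are one common choice, and the mutation rate here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    """Single-point crossover on binary chromosomes (one common GA choice)."""
    point = rng.integers(1, len(parent_a))           # random cut position
    return np.concatenate([parent_a[:point], parent_b[point:]])

def mutate(chromosome, rate=0.01):
    """Flip each bit independently with a small probability."""
    flips = rng.random(len(chromosome)) < rate
    return np.where(flips, 1 - chromosome, chromosome)

# Toy usage: combine two 16-bit parents and apply mutation to the child.
a = np.zeros(16, dtype=int)
b = np.ones(16, dtype=int)
child = mutate(crossover(a, b))
print(child)
```

It is precisely these two operators, absent in PSO, whose effect on CS recovery we study.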

As shown in Figure 3, the decoder (receiver) side receives the observation value from the encoder (transmitter) side. Meanwhile, the inner product is taken between each chromosome and the random matrix representing the mirror array. The objective is to minimize the difference between these two values, subject to a constraint: minimizing the number of non-zero coefficients in the transformed-domain representation of the image. This constraint comes from the fact that images are sparse in some domain; here we exploit sparsity in the DCT domain.
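
A minimal sketch of this objective is given below, assuming an 8×8 binary image, a random 0/1 mirror-array matrix, and an illustrative penalty weight `lam` and zero-threshold `tol` (the weighting in our actual implementation may differ):

```python
import numpy as np
from scipy.fft import dctn

def fitness(chromosome, Phi, y, lam=0.1, tol=1e-6):
    """Fitness sketch for the decoder: observation mismatch + DCT-sparsity penalty."""
    img = chromosome.reshape(8, 8)                    # candidate binary image
    mismatch = np.linalg.norm(Phi @ chromosome - y)   # fit to received observations
    sparsity = np.count_nonzero(np.abs(dctn(img, norm="ortho")) > tol)
    return mismatch + lam * sparsity                  # lower is fitter

rng = np.random.default_rng(0)
x_true = (rng.random(64) < 0.2).astype(float)         # sparse binary "image"
Phi = rng.integers(0, 2, (24, 64)).astype(float)      # 0/1 mirror-array matrix
y = Phi @ x_true                                      # observations at the decoder

f_true = fitness(x_true, Phi, y)
f_rand = fitness(rng.integers(0, 2, 64).astype(float), Phi, y)
print("true image fitness:", f_true)
print("random guess fitness:", f_rand)
```

The true image incurs zero mismatch and pays only the sparsity penalty, so it scores better than a random chromosome, which is what drives the GA search toward it.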

Although the GA approach to the image recovery problem in this scenario seems very costly in time, recent reconfigurable platforms are promising for time efficiency. Hardware implementations of GAs are very efficient because of their parallel architecture and the GA's amenability to parallelization. Fernando et al. implemented a general-purpose genetic algorithm core on an FPGA (field-programmable gate array) suitable for real-time applications [15]. Their core is customizable in population size, number of generations, crossover and mutation operators, and fitness functions. A hardware implementation of a GA eliminates the need for the complex, time- and resource-consuming communication protocols required by an equivalent software implementation [15]. Similarly, [16] proposed a hardware implementation of a GA: the architecture includes a random number generator (RNG), crossover, and mutation modules, and its structure can dynamically perform three types of chromosome encoding: binary encoding, real-value encoding, and integer encoding.
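
As an aside, the hardware-friendly RNGs such cores typically use can be illustrated with a linear-feedback shift register; the sketch below is a textbook 16-bit Fibonacci LFSR (taps 16, 14, 13, 11), not the specific design of [16]:

```python
def lfsr16(seed=0xACE1):
    """16-bit Fibonacci LFSR: one XOR of the tap bits, one shift per step."""
    state = seed
    while True:
        # Feedback bit from taps 16, 14, 13, 11 (bits 0, 2, 3, 5 of the register).
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)            # shift right, insert feedback
        yield state

# Toy usage: draw a few pseudo-random 16-bit values.
gen = lfsr16()
samples = [next(gen) for _ in range(5)]
print(samples)
```

An LFSR needs only a shift register and a few XOR gates, which is why it maps so cheaply onto FPGA fabric compared with a software RNG.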

In this work we have presented our approach to recovering the image from fewer observations in the compressive sensing paradigm. The qualitative and quantitative results demonstrate that a standard GA succeeds in finding a reasonably accurate reconstruction of the image.

Our experiments are limited in that we have evaluated the effectiveness of the method only for binary images; however, the promising results of the GA motivate us to extend our representation to grayscale and color images. As future work, we would like to explore two-dimensional genetic operators, since our data (i.e., images) are 2-D, and such crossover and mutation operators may be more suitable.
