Sunday, December 15, 2013

Scientific Computing: Bioinformatics

What is bioinformatics? Bioinformatics is an interdisciplinary field that applies computer technology to the management of biological information, developing and improving methods for analyzing, storing, and retrieving biological data. Its roots go back to around 1968. Over the past three decades, bioinformatics has seen extraordinary development, drawing on computer science, mathematics, engineering, and the life sciences to process biological data. The term bioinformatics itself is a fairly recent invention, though, and did not appear widely in the literature until the early 1990s. The field is also known as computational biology.

Bioinformatics plays several important roles, such as text mining for the development of biological and gene ontologies, and the analysis of gene and protein structures. Its emphasis on developing algorithms, theory, statistical techniques, and calculations stems from the need to manage and analyze biological data. Bioinformatics tools aid in the comparison of genetic and genomic data: comparing genes within the same species or between different species can reveal similarities in protein function, or phylogenetic relationships between species.
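
As a toy illustration of that kind of comparison (not a real bioinformatics pipeline), the short Python sketch below computes the percent identity between two made-up DNA sequences of equal length; real tools such as BLAST first align the sequences with far more sophisticated algorithms.

# Toy comparison of two equal-length DNA sequences (made-up data).
# Real tools align the sequences first; this only counts matching positions.
def percent_identity(seq_a, seq_b):
    if len(seq_a) != len(seq_b):
        raise ValueError("this toy example needs sequences of the same length")
    matches = sum(1 for a, b in zip(seq_a, seq_b) if a == b)
    return 100.0 * matches / len(seq_a)

gene_from_species_a = "ATGCTAGCTAGGA"   # hypothetical sequence
gene_from_species_b = "ATGCTTGCTAGGA"   # hypothetical sequence
print(percent_identity(gene_from_species_a, gene_from_species_b))   # about 92.3

A high percentage like this is only a hint; deciding whether two genes are truly related takes statistical analysis on top of the raw comparison.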



One of the early driving goals of bioinformatics was to determine the sequence of the entire human genome. Bioinformatics is a powerful technology for the efficient management, analysis, and search of biomedical data. A main concern of bioinformatics is the use of mathematical tools to extract useful information from noisy, chaotic data, often collected with data mining technology.

Bioinformatics now entails the creation and advancement of databases, algorithms, computational and statistical techniques, and theory to solve formal and practical problems arising from the management and analysis of biological data.





Friday, December 6, 2013

Computer Graphics: Image processing

What is image processing? Images are everywhere: the ones we take with our digital cameras and mobile devices and share with family and friends, the ones we see in movies and receive from Mars, and the whole ensemble of images of our bodies taken at the dentist's office or during hospital visits. Image processing is the art of working with such images, from making it possible to capture, transmit, and store them, to cleaning up blurry and dark images for medical analysis, to recognizing the faces of family members and friends in social pictures.

Two robot geologists were launched to Mars in the summer of 2003 and landed in January 2004. After more than eight Earth years, one of them is still operating. While searching for evidence of liquid water on Mars, they took pictures and sent them back to Earth so NASA scientists could analyze them. How could they do that? A camera was mounted on each robot, and the pictures were transmitted to NASA after they were taken. Computer scientists used image management techniques such as metadata to interpret, analyze, and store those pictures. Using metadata, they could record the width, height, file size, file name and directory, and the date each picture was taken. They converted between Mars time and Earth time, and they used the Java Message Service for synchronous and asynchronous messaging.
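
This is not NASA's actual pipeline, but as a rough sketch of the metadata idea, the Python snippet below (assuming the Pillow imaging library is installed and a hypothetical file named mars_rover.jpg) records the kind of information mentioned above: width, height, file size, file name and directory.

# Rough sketch: collect basic metadata about an image file (hypothetical file name).
import os
from PIL import Image   # Pillow library, assumed to be installed

path = "mars_rover.jpg"
with Image.open(path) as img:
    width, height = img.size
    img_format = img.format

metadata = {
    "file_name": os.path.basename(path),
    "directory": os.path.dirname(os.path.abspath(path)),
    "width": width,
    "height": height,
    "size_bytes": os.path.getsize(path),
    "format": img_format,
}
print(metadata)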



Pics from NASA

Saturday, November 30, 2013

Communications and Security: Data Communications

Five basic components make up a telecommunications architecture: the terminal, the processor, the channel, the computer, and the control software. The terminal is where input signals originate. The processor converts signals from analog to digital (and back again at the receiving end). The channel is the medium, such as a cable, that carries signals from one end to the other. The computer performs the communication tasks, specifically running the control software, which in turn handles network activities and functionality.

Some widely known telecommunication networks are the Internet and the telephone system. The Internet is a network of computers (which also act as terminals in most cases) communicating with each other via the TCP/IP protocol suite. Each connected terminal is given an IP address to identify itself within the network, and TCP/IP defines the method by which data is communicated between terminals.
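
As a minimal sketch of two programs talking over TCP/IP (hypothetical loopback address and port, nothing like a production system), Python's socket module can play both ends of the conversation:

# Minimal TCP echo sketch: a server and a client (hypothetical port 50007).
import socket

def run_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 50007))   # the server's IP address and port
        srv.listen(1)
        conn, addr = srv.accept()        # wait for a terminal to connect
        with conn:
            data = conn.recv(1024)       # receive bytes over the channel
            conn.sendall(data)           # echo them back to the sender

def run_client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 50007))
        cli.sendall(b"hello over TCP/IP")
        print(cli.recv(1024))

Run run_server() in one process and run_client() in another to see a single round trip; everything from the IP addressing to the reliable byte stream is provided by the TCP/IP stack described above.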

More recently, a technology for telecommunications networks called Multiprotocol Label Switching (MPLS) has emerged. MPLS is a mechanism in high-performance telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table. MPLS can encapsulate packets of various network protocols and supports a range of access technologies, including T1/E1, ATM, Frame Relay, and DSL.
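
To make the label-switching idea concrete, here is a purely conceptual Python sketch (made-up labels and interface names, not real MPLS code): each router keeps a small table keyed by the incoming label, so forwarding is a single exact-match lookup instead of a longest-prefix search through a routing table.

# Conceptual label-switching sketch: incoming label -> (outgoing interface, outgoing label)
label_table = {
    17: ("eth1", 42),
    42: ("eth2", 99),
}

def forward(incoming_label):
    out_iface, out_label = label_table[incoming_label]   # one exact-match lookup
    print(f"swap label {incoming_label} -> {out_label}, send out {out_iface}")
    return out_label

forward(17)   # swap label 17 -> 42, send out eth1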

The need to carry voice over the Internet gave rise to VoIP (Voice over IP). Now that mobile phones are commonplace, the competition to carry voice and data on a single network is bustling.


From what-when-how.com

Saturday, November 23, 2013

Artificial Intelligence

Allen Newell and Herbert Simon were among the first people to research Artificial Intelligence (AI), starting in the 1950s. But when Artificial Intelligence comes up, some people may ask: what is it? Even though most of us have watched AI movies such as I, Robot or Terminator, the terminology of AI is still new to some of us. Nowadays, advances in computer technology have let computer scientists create some amazing tools, such as the da Vinci robotic surgical system made by Intuitive Surgical and the self-driving car made by Google. Lisp and Prolog were the main programming languages used for AI in the beginning, but later these two languages were used for other purposes as well.


http://www.coterouen.fr/2011/04/21/la-clinique-mathilde-acquiert-un-robot-chirurgien-dernier-cri/

Da Vinci uses the latest surgical and robotic technologies available today. da Vinci robots operate in hospitals worldwide, most commonly for kidney surgery, hysterectomies, and prostate removals. The da Vinci surgical system requires a human operator, most likely the surgeon, and it enables surgeons to perform complex and delicate operations. Its key components include an ergonomic console where the surgeon sits while operating, a high-definition 3D vision system with two eyepieces through which the surgeon views the procedure, and four interactive arms with foot pedals. There are also two hand controllers that the surgeon uses while operating.

Surgeon arms

Surgeon view

Self-driving cars were introduced only recently, but I see them quite often on my way to work and home on Highway 280. However, there has always been a driver behind the wheel; I have never seen one driving without a driver. I hope I can see that someday.

The future self-driving cars

Both the da Vinci system and the self-driving car have been out for a while, but these two technologies are still new to people. Most of us are not quite used to them yet and are afraid to take the risk of using them.

Sunday, November 10, 2013

History of Computer Science: HTML5

HTML5 is very popular nowadays in the web development community. But before introducing HTML5, I would like to talk a little bit about its history. Berners-Lee wrote the first HTML page in 1989. For its first five years, between 1990 and 1995, HTML went through a number of revisions and extensions, hosted primarily first at CERN and then at the IETF. Berners-Lee founded the W3C to standardize HTML for all browsers to follow. HTML started small and grew bigger. In 1995, HTML 2 and HTML 3 were released, then gave way to a more pragmatic approach known as HTML 3.2, which the W3C completed in 1997. HTML 4.0 quickly followed later that same year, and HTML 4.01 came out in 1999.

The following year, the W3C membership decided to stop evolving HTML and instead begin work on an XML-based equivalent, called XHTML. This effort started with a reformulation of HTML 4 in XML, known as XHTML 1.0, which was completed in 2000.

In 2004, the idea that HTML's evolution should be reopened was tested at the W3C. Mozilla and Opera presented an early draft proposal covering just forms-related features and some of the principles that underlie HTML5, but the proposal was rejected: the W3C membership wanted to continue developing XML-based technologies instead.

Browser developers grew frustrated with the W3C's progress and created a new standards committee to write the HTML5 specification. Shortly after that, Mozilla, Apple, and Opera jointly announced their intent to continue working under a new venue called the WHATWG. In 2006 the W3C expressed an interest in participating in the development of HTML5 after all, and in 2007 it formed a working group chartered to work with the WHATWG on the HTML5 specification, which added:
  • New form input types
  • Audio/video
  • Data storage
  • 2D/3D graphics
  • Drag-and-drop
  • and much more


The HTML5 DOCTYPE is now the most common, used on over 40% of pages.

The DOCTYPE is a piece of HTML code at the top of an HTML page that says which version of HTML is being used. For example:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN">   <!-- HTML 4.01 declaration -->
<!DOCTYPE html>   <!-- HTML5 declaration -->

Since different browsers support different audio/video formats, HTML5 lets us specify multiple sources for the audio/video elements, so browsers can use the format that works for them.
For example, below is how we code in HTML5. Very simple and clean, isn't it?

<!DOCTYPE HTML>
<html>
<body>

<video width="320" height="240" controls>

  <source src="movie.mp4" type="video/mp4">
  <source src="movie.ogg" type="video/ogg">
  Your browser does not support the video tag.
</video>

</body>

</html>

<!-- Old HTML version for one kind of video -->
<object width="420" height="360" classid="clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B" codebase="http://www.apple.com/qtactivex/qtplugin.cab">
   <param name="src" value="movie.mp4">
   <param name="controller" value="true">
</object>

Friday, November 8, 2013

File sharing

File sharing is the practice of distributing or providing access to digitally stored information such as audio, video, and documents. File sharing makes it very convenient for people to share files with family members, friends, classmates, and teachers. There are many ways to share files across the Internet, such as FTP, SFTP, and peer-to-peer file sharing. However, many people intend to share music files that are protected by copyright.
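
As one small example of the FTP route, Python's built-in ftplib module can upload a file to a server you have an account on (the host name, credentials, and file name below are all hypothetical):

# Minimal FTP upload sketch (hypothetical server, account, and file).
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login(user="student", passwd="secret")      # or ftp.login() for anonymous access
    with open("homework.pdf", "rb") as f:
        ftp.storbinary("STOR homework.pdf", f)      # upload the file
    print(ftp.nlst())                               # list the files now on the server

SFTP works in a similar spirit but runs over SSH, and peer-to-peer systems skip the central server altogether.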


In 1999, Napster became the first company to provide a peer-to-peer file sharing system. But a couple of years later it was sued by A&M Records and lost, which caused Napster to shut down to comply with a court order. This drove millions of users to other peer-to-peer applications, and file sharing continued its growth. A few more peer-to-peer applications came out, such as LimeWire, Kazaa, BitTorrent, and isoHunt. Some of them are still active today.

Picture from buydig.com


The legal debate surrounding file sharing has caused many lawsuits. According to CBS News in 2009, 58% of Americans who followed the file sharing issue considered it acceptable if a person owned a music CD and shared it with family or friends. However, the record companies stated that they lost money because of unauthorized music sharing, and two-thirds of 22 studies concluded that unauthorized music sharing has hurt recorded music sales.


Today, there are still a few applications and companies that provide file sharing or file hosting. As a student, the two popular companies I see are Google and Dropbox. Google has the Google Drive application, and Dropbox has the Dropbox application. These two applications work as file hosts: a person can upload files and share them with anyone they want. It's very convenient for students and teachers to share lectures or turn in homework using these applications. I can also share projects with teammates, which frees up some space in my mailbox. I have never tried to share music files via Google Drive or Dropbox, so I'm not quite sure whether they allow that or not.

Picture from makeuseof.com


Saturday, November 2, 2013

Data Structures

Data structures and algorithms are fundamental to computer science. For example, a red-black tree is the same whether it's implemented in Java, C++, or Python. According to some experienced software engineers, you may never have to implement a structure like a red-black tree yourself in your career. Even so, understanding how a binary search tree works will likely benefit you in job interviews as well as in your software development career. It's better to know one programming language but have deep knowledge of data structures and algorithms and how they work than to know a few programming languages but not know how to apply data structures or find the best solution to a given problem. Often, once you know one programming language well, you can easily pick up another language in a short time.
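
As a reminder of how a binary search tree works, here is a minimal Python sketch (insert and search only, no balancing, so it is not a red-black tree): smaller keys go left, larger keys go right, so each comparison rules out roughly half of the remaining tree.

# Minimal binary search tree sketch: insert and search, no balancing.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                              # duplicate keys are ignored

def search(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6, 14]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))      # True False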

A software developer can apply data structures and algorithms to define and solve complex problems. Knowing that these data structures exist is not enough; knowing the details better prepares a developer to understand when it is appropriate to use one data structure over another. The most common abstractions for data collections are stacks, queues, lists, trees, maps, and hash tables. Lists and maps offer some valuable features, but they come with costs, and using the wrong one can significantly undermine the performance of the software.
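
As a small illustration of that cost difference, the snippet below times a membership test on a Python list (which scans element by element) against the same test on a set (which uses hashing); the exact numbers will vary by machine, but the gap is dramatic.

# Membership test: a list scans linearly, a set hashes straight to the answer.
import timeit

items_list = list(range(100000))
items_set = set(items_list)

print(timeit.timeit(lambda: 99999 in items_list, number=1000))   # slow: O(n) scan each time
print(timeit.timeit(lambda: 99999 in items_set, number=1000))    # fast: roughly O(1) hash lookup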

Knowledge of data structures and algorithms is an important part of a software developer's skill set because they matter so much when coding. To develop robust, efficient, and reusable code, software developers need to know how to design and analyze efficient data structures and algorithms. They also need to know the advantages and disadvantages of the different types of data structures and algorithms. So, how do you strengthen your knowledge of data structures and algorithms?

 
Picture from online