Wednesday, June 15, 2011

Virtualization

It has been quite a while since I last wrote a post- my work schedule is pretty crazy, and I was often falling asleep as soon as my head hit the pillow. I'm planning my next article, but until then, here is a great entry on virtualization from my brother Michael. He is incredibly smart and knowledgeable about computers- you could call him the expert networker! Here we go:



Virtualization (or When you ride alone, you ride with your local utility company! DUN DUN DUNNNNNNN)

Greetings, readers! I am the Not-quite-so-novice Networker; you can just call me Mike. I am Caitlin's favorite younger brother (or not, I dunno, she may be telling everyone she knows that I suck more than an Electrolux or somethin'). Anyway, I am here to introduce, in an interesting manner, technologies, issues, and software that you will eventually need to get familiar with should you actually like all this computer fixing and head down the dark path full of headaches, silliness, and (justified) paranoia to a full-time IT job. The key words here are "interesting" (I'm allergic to writing boring articles; just thinking about it makes me sneeze) and "introduce" (you know what else I'm allergic to, besides Rocky Mountain cedar and cats? Writing long, complicated articles. I'll crack open the door for you, but you'll have to walk through it yourself- I'll provide some handy links so you can learn more, though). Anyway, let's get started.

Let me start with a topic that is close to my heart- and by "close to my heart," I mean one I have been frantically studying ever since my company committed tens of thousands of dollars to it and expects me to maintain and fix it. (That's one thing entry-level IT workers need: the ability to hit the ground running on new technologies on little notice. Training is for old, important people.) That topic is virtualization. So what is virtualization, you ask? Well, let's start with how things generally are now.

If you follow best practices (and you should), you generally have one server for each application your network provides, whether that's mail, web sites, or file storage. It doesn't matter if you have several servers running the exact same application (for redundancy) or multiple servers working together on one application (clustering), so long as each server is running just that one application- doing otherwise makes problems much more serious, since they'll affect more than one application. However, servers nowadays are pretty powerful (and expensive), and having them do just one thing is a waste of energy and money. So what do we do? We virtualize, that's what!

Virtualization takes a server's software, separates it from the hardware it runs on, and turns it into a portable package called a virtual machine, which can be transferred between servers, restarted, and duplicated, among other things, at will. A powerful server running what is called a hypervisor can run several of these virtual machines at once, splitting the server's resources efficiently among all of them. You can think of it like carpooling: having 4 people drive 4 cars to 4 destinations is a waste of gas and money, while having 4 people ride in one car is far more efficient. Only in this case, should the car decide to spontaneously explode, all of the passengers can hop into the car next to them going the same speed and continue on without a single delay.
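To make the idea concrete, here's a toy sketch in Python of a hypervisor splitting one host's RAM among virtual machines and "migrating" one to another host. The host names, sizes, and the migrate() helper are all made up for illustration; real hypervisors are vastly more sophisticated.

```python
# Toy model of a hypervisor splitting one host's resources among VMs.
# All names and sizes here are invented for illustration.

class Host:
    def __init__(self, name, ram_gb):
        self.name = name
        self.ram_gb = ram_gb
        self.vms = {}                      # vm name -> RAM it uses

    def free_ram(self):
        return self.ram_gb - sum(self.vms.values())

    def start_vm(self, vm_name, ram_gb):
        if ram_gb > self.free_ram():
            raise RuntimeError("not enough RAM on " + self.name)
        self.vms[vm_name] = ram_gb

def migrate(vm_name, src, dst):
    """Move a VM between hosts -- the 'hop into the next car' trick."""
    dst.start_vm(vm_name, src.vms.pop(vm_name))

host_a = Host("host-a", 64)
host_b = Host("host-b", 64)
host_a.start_vm("mail", 16)
host_a.start_vm("web", 8)
migrate("web", host_a, host_b)  # host-a needs maintenance? No downtime.
print(host_a.vms)  # {'mail': 16}
print(host_b.vms)  # {'web': 8}
```

The point of the sketch is just that the "mail" and "web" workloads are now packages that any capable host can run, instead of being welded to one physical box.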

Wanna know more about this technology? (You should!) Then check out these links:

VMware: www.vmware.com, the leader in virtualization at the moment. Their website provides all sorts of information on virtualization, as well as free software so you can give this virtualization stuff a try yourself!

Microsoft: www.microsoft.com/virtualization/en/us/default.aspx , Microsoft is also in the virtualization business, though their Hyper-V isn't as popular as VMware's vSphere. Still worth a look.

Wikipedia: http://en.wikipedia.org/wiki/Virtualization , a good place to learn about all the various things virtualization is used for.

Thursday, June 9, 2011

PATA vs. SATA

Computer components and devices are so heavily intertwined that it is hard to separate entries into very basic topics so that beginners can understand what I am talking about. I consider this entry 3 of 3 of what was initially just going to be one entry- HDD vs. SSD. That didn’t work out so well, but at least it gives me more stuff to write about.

This entry is about computer component interfaces. When I was first starting to learn about the inside of computers, this topic took me a while to understand, but all it really covers is how computer components connect to one another and how information travels between them. There are other interface types that were used more in the past (SCSI, FireWire, Centronics), but the two I will be focusing on are PATA and SATA.

Though it is not as popular as it once was, expect to work with PATA on older computers. PATA stands for Parallel AT Attachment, and it is the name given retroactively to devices that transmit data over parallel wires (Wikipedia, 2011). It is just easier to assign the one name, because PATA has been called different names over the years (IDE is the most common, but EIDE, ATA, and ATAPI are out there too- it's all PATA now). PATA transmits 8 bits at a time over 8 data wires (one bit per wire = 1 byte) through a 40-pin ribbon cable that can connect up to two drives inside the computer.

SATA stands for Serial Advanced Technology Attachment, and this bus interface improves on PATA in many ways. Unlike PATA, a SATA cable connects to only one drive, but in return it provides higher data transfer rates, reduced bulk (those ribbon cables are huge!), and it's hot swappable! Serial transfer is also the idea behind USB, which powers those flash drives that act like mini portable hard drives. I remember those little devices becoming popular when I was in college; a godsend, since I didn't have to carry floppy disks or those giant Iomega disks around anymore. SATA transmits data one bit at a time in a single stream. PATA interfaces transmit data at speeds between 5 MB per second and 133 MB per second; SATA transmits much faster, at rates between 150 MB per second and 300 MB per second (Blogulate, 2007).

So why does SATA transmit data faster than PATA? This took me a while to understand, because one would think that with 8 bits going over the wires at once, data would transfer faster than in one single stream. Here are the reasons why this isn't the case:

1. Those 8 bits traveling the wires may not reach the destination all at once. The computer slows down when it has to wait for all those bits to catch up (Blogulate, 2007).
2. The more wires there are, the higher the chance of disturbances (Blogulate, 2007).
3. Those 8 bits have to be converted back into one stream for the destination port to read them, and this adds time. Since serial data is already transmitted in one single stream, no conversion needs to happen, and time is saved (Wikipedia, 2011).
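Point 3 can be sketched in a few lines of Python: a byte sent in parallel is really 8 bits on 8 wires that have to be reassembled at the far end, while a serial stream skips that step. This is purely illustrative; real PATA/SATA signaling is electrical, not software.

```python
# A byte on a parallel link is 8 bits on 8 "wires" that must be
# reassembled at the far end; a serial link sends one stream instead.

def to_parallel(byte):
    """Split one byte into 8 bits, one per 'wire' (least significant first)."""
    return [(byte >> i) & 1 for i in range(8)]

def from_parallel(bits):
    """Reassemble the 8 wires back into a byte -- the extra conversion
    step parallel links pay for that serial links skip."""
    return sum(bit << i for i, bit in enumerate(bits))

wires = to_parallel(0x41)           # the letter 'A'
assert from_parallel(wires) == 0x41
print(wires)  # [1, 0, 0, 0, 0, 0, 1, 0]
```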

PATA and SATA are not interchangeable, so any devices will have to be one or the other, unless there is an adapter.

Blogulate. (2007, December 2). Why is Serial communication preferred over Parallel? Retrieved June 8, 2011, from Blogulate: http://blogulate.com/content/why-is-serial-communication-preferred-over-parallel/
Wikipedia. (2011, May 31). Parallel ATA. Retrieved June 9, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Parallel_ATA
Wikipedia. (2011, June 4). Serial ATA. Retrieved June 9, 2011, from Wikipedia: http://en.wikipedia.org/wiki/SATA

Monday, June 6, 2011

Solid State Drives

The hot computer component in the news is the solid state drive (SSD), named for having no moving parts inside its unit, unlike the fragile hard disk drive (HDD). An example of a solid state storage device is the memory card in a digital camera. That technology is becoming another option for computer storage instead of an HDD. SSDs have their advantages and disadvantages compared to HDDs, and while they are popular, they may not be replacing HDDs completely any time soon.

There are many reasons why people (like me) advocate making backups of a hard drive; the biggest is that hard drives can and do fail. The hard drive is like the cerebral cortex: it is where the memory is located (Wikipedia, 2011). A monitor or a motherboard can be replaced without losing information, but once the hard drive is dead, hours of work, time, and money are gone in an instant. With the advent of mobile technology, it is more important than ever to have a storage device that is stable and can handle a few hard knocks. The SSD is becoming a legitimate hard drive alternative for mobile devices, where a traditional hard drive would be damaged by the constant moving- and possibly dropping- of said devices.

While HDDs have platters with read/write heads reading the tracks on the platters (read the previous entry for a lowdown on all those parts), an SSD has no moving parts and stores data electronically (Tyson, 2011). This is also called flash memory. There are many perks to using an SSD over an HDD besides its sturdiness: it weighs less, starts up faster, and magnets have no effect on it (Wikipedia, 2011). However, HDDs will not be obsolete anytime soon: even though prices are dropping, SSDs are still more expensive than HDDs, and their storage sizes aren't anywhere near as large. This chart on Wikipedia gives a good outline of the other differences between SSDs and HDDs.

There is also the perception that SSDs aren't as reliable as HDDs. In a study mentioned in this article, SSDs were returned to stores as malfunctioning at a higher rate than HDDs. Now, that article was written in December 2010, and as I am writing this entry in early June 2011, that is an eon of time for technology. In another article from March 2011, Seagate is promoting their business line of SSDs, which should be incredibly powerful and reliable- and also incredibly expensive. This should give an idea of how fast technology is improved and updated: one article is about how SSDs are being returned at a high rate as malfunctioning, and another article from the same website a few months later is about how much SSDs have improved. Who knows what will happen a year from now? Oo, now I have a blog article to write…a year from now.

One last thing I have to talk about is TRIM. It's mentioned a lot during SSD discussions, so it's important to know what it is. When information is deleted from an SSD, the drive won't erase the individual file right away- it can only erase information in blocks. Over time, the SSD slows down because it is full of files marked for deletion, arranged in blocks. Once the blocks are full, the SSD starts erasing them, which takes a long time. However, if both the operating system and the SSD have TRIM support (and both of them need to support it for this to work), then the files marked for deletion are cleaned up when they are deleted, not when the blocks of deleted items are full (Hilton, n.d.). Just think of it as the end user taking control of their hard drive.
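Here's a toy Python model of that difference. The ToySSD class and its page bookkeeping are invented for illustration; a real SSD tracks flash pages and erase blocks in firmware.

```python
# Toy sketch of why TRIM helps: without it, the SSD only learns a page
# holds garbage when the block is erased later; with TRIM, the OS tells
# it at delete time. All structures here are invented for illustration.

class ToySSD:
    def __init__(self, trim_enabled):
        self.trim_enabled = trim_enabled
        self.pages = {}            # filename -> data
        self.stale = set()         # pages still holding deleted data

    def write(self, name, data):
        self.pages[name] = data

    def delete(self, name):
        del self.pages[name]
        if not self.trim_enabled:
            # No TRIM: the drive must wait for a slow block erase later.
            self.stale.add(name)
        # With TRIM: the OS's hint lets the page be reclaimed right away.

ssd = ToySSD(trim_enabled=False)
ssd.write("a.txt", "hi")
ssd.delete("a.txt")
print(len(ssd.stale))       # 1 -- stale page waiting for a block erase

trim_ssd = ToySSD(trim_enabled=True)
trim_ssd.write("a.txt", "hi")
trim_ssd.delete("a.txt")
print(len(trim_ssd.stale))  # 0 -- reclaimed as soon as the file was deleted
```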

Hilton, J. (n.d.). What is TRIM support? Retrieved June 6, 2011, from Top Ten Reviews: http://solid-state-drive-review.toptenreviews.com/what-is-trim-support.html
Tyson, J. (2011). How Flash Memory Works. Retrieved June 6, 2011, from How Stuff Works: http://electronics.howstuffworks.com/flash-memory.htm
Wikipedia. (2011, April 25). Cerebral Cortex. Retrieved June 6, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Cerebral_cortex
Wikipedia. (2011, June 4). Solid State Drive. Retrieved June 6, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Solid-state_drive

Friday, June 3, 2011

Hard Disk Drives

I like to buy external hard disk drives like I do nail polish: I just like to pick one up when I’m walking around Target aimlessly.  I’ll remind myself that I haven’t bought any for several months, and heck, I’ve been good, and I deserve it. However, unlike the nail polishes, where I find them underneath Wii games and magazines, I never forget to use my external hard drive. Like I mentioned in previous entries, a fear of losing my awesome music collection can keep me up at night, so I back up again and again, on different hard disk drives (HDD) besides the one inside my computer.

A hard disk drive is an internal or external computer component that holds data (Wikipedia, 2011). A hard drive on a personal computer is the opposite of cloud computing: the data is on one computer, and if the disk drive fails, that data is gone for good. Hence, create backups! Since the HDD is a fragile piece of equipment with moving parts inside it, it is best suited for a desktop. It can be used in laptops, but that's riskier, since laptops can be dropped and knocked around easily. (Trust me on this- I don't know how my laptop survived ten years with me.)

The reason that HDDs are so delicate is that the components inside them are sensitive. If a hard drive is opened (which shouldn't be done unless it is defective), it looks like a little record player with several parts:

Platters- The discs inside the HDD. Each side of a platter is read by its own head, so drive geometry counts the sides as "heads" (Torres, 2005).

Tracks- These are like grooves on a record - circular paths written on either side of the platter (Torres, 2005).

Sectors- Smaller portions of a track, which contain 512 bytes of data (Torres, 2005).

Cylinders- The set of tracks in the same position on every platter. Stacked straight down through the set of platters, these tracks form a cylinder shape (Partition Manager Software, 2011).
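Those four terms come together in CHS (cylinder/head/sector) addressing, where any sector on the drive can be located by its cylinder, head, and sector numbers. Here's a small worked example in Python; the drive geometry (4 heads, 16 sectors per track) is invented for illustration.

```python
# CHS addressing: walk cylinders, then heads, then sectors, to give
# every 512-byte sector a flat number (LBA). Sectors are 1-based in CHS.

def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Convert a cylinder/head/sector triple to a flat sector number."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# A toy drive: 2 platters = 4 heads, 16 sectors per track, 512 bytes each.
lba = chs_to_lba(cylinder=1, head=2, sector=3,
                 heads_per_cylinder=4, sectors_per_track=16)
print(lba)        # 98
print(lba * 512)  # byte offset on the disk: 50176
```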

When buying a new hard drive, especially the main internal hard drive where the main data storage will be, it's important to partition and format it.

Partitioning is done first- it's separating the hard drive into different sections, which also makes it possible to have more than one operating system on it. (Tip: if installing both a Microsoft O/S and a Linux O/S on the same drive, install Microsoft first.) Partitions are treated as logical drives, and they are listed on the computer by letter. On a Windows-based system, the main logical drive is the C drive (Docter, Dulaney, & Skandier, 2007).

Partitioning assigns placement on a hard drive; formatting allows the partition to store data in a certain way. Older (like, DOS old) formats for Windows-based systems are FAT16 and FAT32. Formatting should be done with the newest format, NTFS. The FAT systems use a file allocation table and a root directory; when saving files, this is where each file is recorded by its name, followed by a period and the extension (ex: name.docx). NTFS is backwards compatible with the FAT systems but expands on them by adding features such as file compression and file-level encryption (Docter, Dulaney, & Skandier, 2007).
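As a rough sketch of the file-allocation-table idea, here's a toy version in Python. The cluster numbers are invented, and a real FAT is a fixed on-disk structure, not a dictionary; this just shows the "name.extension maps to the clusters holding the data" bookkeeping.

```python
# Toy file allocation table: formatting lays down a table mapping each
# file's full name (name + "." + extension) to the clusters that hold it.
# Cluster numbers below are invented for illustration.

fat = {}  # our stand-in for the on-disk file allocation table

def save(name, ext, clusters):
    """Record which clusters hold a file, keyed by name.extension."""
    fat[name + "." + ext] = clusters

save("resume", "docx", [2, 3, 7])  # a file scattered across three clusters
save("notes", "txt", [4])

print(fat["resume.docx"])  # [2, 3, 7]
```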

Next time I'll talk about the hot new internal storage device that's all up in the news: the solid state drive.

Docter, Q., Dulaney, E., & Skandier, T. (2007). CompTia A+ Complete. Indianapolis, Indiana: Wiley Publishing, Inc .
Partition Manager Software. (2011). What is disk formatting? Retrieved June 3, 2011, from Partition Manager Software: http://www.partition-magic-manager.com/partition-magic/partition-magic-help/WhatIsDiskFormatting.php
Torres, G. (2005, August 4). Anatomy of a Hard Disk Drive. Retrieved June 3, 2011, from Hardware Secrets: http://www.partition-magic-manager.com/partition-magic/partition-magic-help/WhatIsDiskFormatting.php
Wikipedia. (2011, June 2). Hard Disk Drive. Retrieved June 3, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Hard_disk_drive

Wednesday, June 1, 2011

Tech IPOs- Past and Present

I have a LinkedIn account. It is a professional version of Facebook where you can post a resume, give and receive recommendations from colleagues, and join groups, and it is often the first place that companies and recruiters go to view potential new employees. LinkedIn has garnered a lot of attention in the news recently for its initial public offering- commonly referred to as an IPO- and there are concerns that its arrival on the public market will bring back memories of the dot-com bust that happened in the early 2000s.

It isn't just LinkedIn that is incredibly popular with investors right now- many tech startups, including Groupon, Zynga, and Facebook, are currently considering whether to offer their own IPOs, and underwriters are willing to do business with them. Investment bankers are on the lookout for the next big innovative tech company to take public, and excitement is high for these types of social networking websites (Tam, 2011).

In my research for this entry, I found some similarity between the rise of the dot-coms and the rise of social media in terms of sheer popularity and banks' willingness to invest in these companies. In the late nineties, if a company wanted to become more popular, it just added an "e" to the front of its product or name. In 2000, seventeen different dot-coms had commercials during the Super Bowl (Timelines, 2011). Now most of these companies are in the history books for their spectacular crash and burn. This link from Timelines gives a fascinating timeline from when the first dot-coms started arriving until the bubble burst, plus the aftermath of it all. Today, social networking websites such as Facebook have become a central hub for people to communicate and plan events with. 500 million people have Facebook accounts, which I imagine is more than the number of people who had Geocities webpages (oh, how I miss the page I created in 1996).

This brings up two issues that I'm going to address: what an IPO is, and whether there is a difference between the startups of today and the ones of ten years ago.

For companies past and present that want into the stock market, they have to get funding by raising money through an underwriter from an investment bank. Goldman Sachs and Morgan Stanley are two examples of underwriters. These underwriters drum up interest in the company by going to large-scale investors so that enough money is raised to issue stock. Once the underwriters figure out who will invest, the market conditions, and any other important information, the startup and underwriter determine the initial stock price (Investopedia, 2011). This entry from Investopedia gives an in-depth look at the process of issuing an IPO.

This process has been around for a long time, but the term "IPO" didn't enter the public lexicon until the dot-coms were planning and issuing their IPOs. The novelty and popularity of these companies caused a lot of the people in charge- underwriters, the creators of the dot-coms, investors- to overlook bad business plans (Wikipedia, 2011). There were startups of this era that became successful- Amazon and Google- but these were the exception, because they had good business plans that accounted for the fact that they wouldn't have profit (edited 6-2-11) for the first several years of business. The rest of the companies blew through millions of dollars in months and went bankrupt (Wikipedia, 2011).

Now that LinkedIn has had its IPO- it "closed at $94.25, more than 109% above the $45 IPO price" (Baldwin & Selyukh, 2011)- interest in underwriting other social media sites is incredibly high, and people are understandably worried that companies will be overvalued and history will repeat itself, with money and jobs lost to bad planning (Noguchi, 2011).

Fortunately, there are some positive differences in the internet startups of today. LinkedIn has been around for eight years and, unlike the startups of ten years ago, does not have to spend millions to attract users, because it already has them. Technology has also made strides in the past decade- hardware is cheaper and faster than it has ever been. Ten years ago, computer users were on incredibly slow dial-up connections; now those users are a minority, since most people use cable modems and DSL lines (Noguchi, 2011).

While it is good to be cautious, especially when it comes to hundreds of jobs and millions of dollars, it seems that some hard lessons have been learned and people are excited to invest in innovative tech startups again. 

Baldwin, C., & Selyukh, A. (2011, May 19). LinkedIn share price more than doubles in NYSE debut. Retrieved June 1, 2011, from Reuters: http://www.reuters.com/article/2011/05/19/us-linkedin-ipo-risks-idUSTRE74H0TL20110519
Investopedia. (2011). IPO Basics: Introduction. Retrieved June 1, 2011, from Investopedia: http://www.investopedia.com/university/ipo/default.asp
Noguchi, Y. (2011, May 26). In LinkedIn IPO, Hints Of Another Tech Bubble? Retrieved June 1, 2011, from NPR: http://www.npr.org/2011/05/26/136655334/in-linkedin-ipo-hints-of-another-tech-bubble
Tam, P.-W. (2011, May 31). Echoing Around Tech Confab: 'Call Me'. Retrieved June 1, 2011, from Wall Street Journal: http://online.wsj.com/article/SB10001424052702303654804576349482665455162.html?mod=WSJ_Tech_LEFTTopNews
Timelines. (2011). Dot-Com Bubble. Retrieved June 1, 2011, from Timelines: http://timelines.com/topics/dot-com-bubble
Wikipedia. (2011, May 30). Dot-com bubble. Retrieved June 1, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Dot.com_bust

Monday, May 30, 2011

I, Robot

First thing I should mention before writing this article about Lingodroids: I have never seen any science fiction in my life. Star Wars, I, Robot, Terminator- I know these movies exist, but I haven't actually seen them. The closest I have come to enjoying science fiction is Futurama cartoons, and even then some of the references whoosh straight over my head. So any evil implications from the project I am about to describe will be completely lost on me, as I think all robots are great to have a drink with and push the planet a few inches out of orbit when the sun's rays get too close.


I mentioned in previous articles that computers have their own language. There is no thought or emotion behind it; it is just a series of protocols that the computer follows to get information from one place to another. A robot is a form of computer (Neoaikon, n.d.), but the way it understands information is different from how the computers we use every day do, as it has the ability to perceive and sense data from its surroundings (Schulz, Glover, Wyeth, & Wiles, 2011).


Now, it sounds pretty awesome that a machine without a brain can learn, but apparently that can also be a bad thing if one has seen Terminator. Luckily for me, I have not, so I can appreciate a recent study on robots- called Lingodroids- that has come out. There shouldn't be any fear of these robots, because while they can learn, they cannot think exactly like human beings- their brains aren't as flexible and can't comprehend such complex human concepts as culture and society (Schulz, Glover, Wyeth, & Wiles, 2011). But while robots do not have this "artificial intelligence," it isn't because people aren't trying to give it to them.


A project called the Lingodroid Project is allowing robots to develop their own language by letting two robots communicate with each other (Ackerman, 2011). Having more than one robot is important for this study, because they establish a language by looking at a random object and agreeing with each other on a made-up word to call it. The Lingodroids learn their language through hundreds of games created by the scientists that determine what is in their location (Ackerman, 2011). Through the games, the robots create a map of the area they are in. The more games they play, the more sophisticated their language becomes: it evolves from directions and points of reference to how long it takes to get from one point to another, and even stories about the objects in the location (Schulz, Glover, Wyeth, & Wiles, 2011).


While this is a huge achievement for the scientists, they have bigger dreams for their Lingodroids. They want the Lingodroids to develop their own grammar (thereby becoming smarter than a portion of Facebook users- ZING!) and to have their language alter their behavior (Schulz, Glover, Wyeth, & Wiles, 2011). If they can develop a robot brain that mimics a human's, maybe these robots can bring those science fiction movies to life. Let's hope that these robots are less Terminator and more Wall-E (which I also haven't seen, but I hear he's cute).


Ackerman, E. (2011, May 23). Robots invent their own spoken language. Retrieved May 30, 2011, from MSN: http://www.msnbc.msn.com/id/43143802/ns/technology_and_science-science/


Neoaikon. (n.d.). Is a robot like a computer? Retrieved May 30, 2011, from Answers.com: http://wiki.answers.com/Q/Is_a_robot_like_a_computer


Schulz, R., Glover, A., Wyeth, G., & Wiles, J. (2011, May). Robots, Communication, and Language: An Overview of the Lingodroid Project. Retrieved May 30, 2011, from Australian Robotics and Automation Association: http://www.araa.asn.au/acra/acra2010/papers/pap163s1-file1.pdf

Thursday, May 26, 2011

A solution for too many computers- Wireless routers!

My dad has many computers in his house- I wouldn’t be surprised if he still had the first computer the family bought in 1990. In a world where computers become obsolete in two years, he has computers that are old enough to rent cars.

All of these computers are still being used and they need to be connected to the internet. Since dad’s computers are all over the place, the best way to connect them is through a wireless router.

Wireless routers use radio frequencies to transmit signals to computers, so there is no need to have Cat 5 cables hooked from computer to router all over the house. The client computers receiving the signal either have a network adapter installed (internally or externally) or built in, as in netbooks.

There are standards for the different strengths and speeds of the different types of wireless signals. Here is a chart that details the differences between the standards:

Standard Name   Frequency      Maximum Speed
802.11a         5 GHz          54 Megabits per second
802.11b         2.4 GHz        11 Megabits per second
802.11g         2.4 GHz        54 Megabits per second
802.11n         2.4 or 5 GHz   Up to 300 Megabits per second



Since my dad's computers were slow and running into interference from the cordless phones he has all over the house (cordless phones operate in the 2.4 GHz band and can weaken wireless connections on both 802.11b and 802.11g), he decided to upgrade to the Linksys N Dual Band router, which can run connections on both the 2.4 GHz band and the 5 GHz band. He asked me to help him set up the router. Setting up new routers isn't difficult, but it's also not simply plug and play, so I will go over the steps here to give an idea of how it is done. We did this on an XP operating system.

1. The first step is to unplug the old router and plug in the new one. Once it is set, the router's lights should start blinking.

2. Go to the Control Panel and click on Network Connections. The different internet connection icons should appear, including one indicating that the LAN connection is disabled. Click on it to enable it.

3. If a firewall pops up, depending on the firewall, choose the option that indicates a "Trusted Zone" instead of an "Internet Zone".

4. If using DSL (as opposed to a cable modem), a logon and password are needed at this point. The ISP providing the DSL service should have provided them when you signed up.

5. Open a web browser and type in the router's IP address- generally, it is 192.168.1.1. A pop-up asking for a user ID and password will appear at this point; this is from the router. These should be in the router packaging, but if they are lost, try www.routerpasswords.com: just put in the make of the router and it should provide the default user ID/password. Make sure to change this as soon as possible.

6. Once that is typed in, the router manufacturer's configuration page will appear. Since we used Linksys, the Cisco page came up, with several tabs for configuring the router.

Some of the configuration options include naming each frequency band with an SSID and choosing the right wireless security (WPA2 is currently the most secure option out there). Go ahead and check the other tabs to determine how the router should work, and save the settings.





Open up a new browser window and type in a URL. If the router is working like it should, the page will load and the internet is working again. If there are issues, check that all the cables are plugged in (yes, this is a cliché, but it did happen to us early on during this installation) and that the directions were followed, including the manufacturer's. If that doesn't work, technical support from the manufacturer or technical message boards are another option for help.



Tomsho, G., Tittel, E., & Johnson, D. (2007). Guide to Networking Essentials. Boston: Thomson Course Technology.

Monday, May 23, 2011

The OSI Model

I have made several attempts to write a one page summary of the OSI reference model, the model used to teach those who are new to networking how computers use the internet to communicate with one another. I keep failing at this task. The problem with writing an easy to read summary on the OSI model is that the way that computers talk to one another is complicated and abstract. Thank goodness that these machines aren’t sentient!

I was able to illustrate how the model works by charting out each step, from the human input to how the computer processes it. This chart is VERY simplified, but I feel it gives a basic idea of what happens behind the scenes when we want to access anything on the internet.



Here is the link to the full size PDF document


The red side of the chart is what happens when you begin with the application layer and move down the model, the blue side is what happens when the data moves up the model, and the black part is the physical layer that separates the red and blue sections. The underlined words are what protocol data units (the chunks of computer information) are called in each layer. Something I didn't mention in the chart, but that's good to keep in mind: when data moves down the model, each layer adds its own header with an address on it- this is called encapsulation. When data moves up the model, each layer removes the header that comes with its PDU- that is called decapsulation.
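Encapsulation and decapsulation can be sketched in a few lines of Python. The layer names follow the OSI model, but the header contents here are made up for illustration.

```python
# Sketch of encapsulation/decapsulation: going down the model, each layer
# wraps the data in its own header; going up, each layer strips one off.
# Header strings are invented for illustration.

layers = ["transport", "network", "data-link"]

def encapsulate(data):
    for layer in layers:               # moving DOWN the model
        data = "[%s-hdr]%s" % (layer, data)
    return data

def decapsulate(frame):
    for layer in reversed(layers):     # moving UP the model
        prefix = "[%s-hdr]" % layer
        assert frame.startswith(prefix)
        frame = frame[len(prefix):]
    return frame

frame = encapsulate("GET /index.html")
print(frame)  # [data-link-hdr][network-hdr][transport-hdr]GET /index.html
print(decapsulate(frame))  # GET /index.html
```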

Kozierok, C. M. (2005, September). Understanding The OSI Reference Model: An Analogy. Retrieved May 23, 2011, from The TCP/IP Guide: http://www.tcpipguide.com/free/t_UnderstandingTheOSIReferenceModelAnAnalogy.htm
Tomsho, G., Tittel, E., & Johnson, D. (2007). Guide to Networking Essentials. Boston: Thomson Course Technology.

Friday, May 20, 2011

A look into the A+ exam

If you are someone who is interested in learning more about computers and wants to make a career out of working with them, the CompTIA A+ certification is a very good start. The A+ examination covers the foundations of working with computers, from the parts inside to understanding and troubleshooting the operating system. I currently hold the certification for the 2006 version of the exam; that version has since been retired, and the vendor-neutral organization is now on the 2009 version.

While the exam is a good choice for people who want to learn about computers and earn their first certification, if you are like me when I first started studying for the A+, it isn't easy. In fact, it was one of the things that inspired me to create this blog. While everything I learned was essential, reading about it is incredibly DRY. Personal computer components, troubleshooting all the issues, understanding the difference between the BIOS and CMOS- learning this stuff is not as fun as reading an XKCD cartoon. I put many months into studying for the exam, and after receiving my certification I wanted to help people who might be in the same situation by making studying about computers easier to understand.

I recommend not just studying for the exam, but getting practical, hands-on experience. A motherboard makes a lot more sense when you build your own computer. Later entries will detail my journey of building my first computer and installing a wireless router. Building my own computer helped me appreciate the work my computer puts into running Sims 2, or letting me explain to people on the internet why they are wrong.

Here is the link to CompTIA’s page on the A+ certification. You can see you have to pass two exams in order to receive your certificate. 220-701 is the Essentials exam, which you must pass with a score of 675 or above; 220-702 is Practical Application, and you need a 700 to pass that one. For my exam, CompTIA organized the questions into categories. When you finish your exam, you won’t know which questions you missed, only the categories the missed questions were in.

Starting in 2011, people taking the exam will have to retake it every 3 years in order to keep their A+ certification. While most organizations that issue certifications already have this rule in place, it is new for CompTIA. Anyone who took the exam before January 2011 is essentially grandfathered in with permanent certification; however, I recommend keeping your skills up to date, because computer technology changes rapidly. I took the exam in 2009, and even some of the information in my guides published in 2006 was already obsolete. I am currently studying for the 2009 version of the exam, and I will post my progress along with flash cards, cram sheets, and anything else I can think of to help out.

I used a few sources while studying for my A+ exam. I used this book, which was helpful, and even better was Professor Messer. If reading books until you can’t keep your eyes open doesn’t sound like your idea of a good time, Professor Messer’s free videos are a nice change of pace. The videos are broken into small pieces, and he is an excellent teacher. Go check him out!

Thursday, May 19, 2011

I always feel like somebody's watching me

Hackers stealing personal information from big companies are getting a lot of press lately, and it seems they are getting more sophisticated and effective at obtaining customer information and personal details. In early April, Epsilon, a marketing service firm that sends promotional emails for companies like Walgreens, Best Buy, TiVo, and more, was hacked; fortunately, the hackers didn’t obtain any customer information more personal than email addresses (Associated Press, 2011). It may mean more spam, but as long as the spam isn’t responded to, the hackers shouldn’t learn anything more personal than that.

While I do have companies like Walgreens send me emails (the company sends me money-saving coupons!), when I heard about this breach I wasn’t too concerned. The affected companies acknowledged the leak quickly and were doing their own research into what happened. I never send out personal information through email, and I feel pretty secure that all the companies involved handled this as best they could.

The more alarming attack came less than two weeks later, on the Sony PlayStation Network. Personal details, including credit card information, were stolen from 77 million accounts (although not all 77 million accounts had credit card information), making this one of the largest security breaches in history (Wikipedia, 2011). That is pretty scary, to put it mildly. Unlike the companies mentioned above, Sony took down the PlayStation Network but didn’t acknowledge to their customers that they had in fact taken down the network for several days (Ogg, 2011). The company didn’t even mention that they had been exploited in an attack until a few days after that (Ogg, 2011). The continuing issues with getting the PlayStation Network back up and the delay in announcing just what the heck happened are not making Sony any new friends.

My previous posts talked about the use of servers in computing. Gaining access to Sony’s servers was how the hackers reached PSN customer information, and they made the whole process look just like an online purchase (Ogg, 2011).

These two stories show that hackers are only going to get better at taking advantage of any technological weakness in a company. The good guys are trying to keep up, but the onus is on the individual to keep their identity safe. Here is a link for any PlayStation gamers who need ideas on how to keep playing online safely.

Here is a link from Epsilon on how to avoid getting scammed by phishing emails. I treat my email like my cell phone- if I don’t recognize who the incoming message is from, I don’t respond. When I have to go to a website, I type the link into my browser, and I don’t click on links in any suspicious emails.

If phishing is the least of your concerns, here is a link from OnGuard Online on how to reduce online identity theft. Keep an eye on your financial statements and credit reports, and don’t use the same password for different accounts.

Associated Press. (2011, April 4). Best Buy, TiVo, Walgreens Hacked Over the Weekend. Retrieved May 19, 2011, from Billboard: http://www.billboard.biz/bbbiz/industry/digital-and-mobile/best-buy-tivo-walgreens-hacked-over-the-1005109762.story
Ogg, E. (2011, May 3). The PlayStation Network breach (FAQ). Retrieved May 19, 2011, from CNET: http://news.cnet.com/8301-31021_3-20058950-260.html?tag=mncol;txt
Wikipedia. (2011, May 18). PlayStation Network outage. Retrieved May 19, 2011, from Wikipedia: http://en.wikipedia.org/wiki/PlayStation_Network_outage#Unencrypted_personal_details

Tuesday, May 17, 2011

A CueCat, but even better!

Up until a few months ago, I had a clamshell cellphone. I had no idea that my phone was out of fashion until I was hanging out with my friends at a bar and realized I was the only person without a smartphone. My phone could take blurry pictures, and I could text with the best of them (as my phone bills would show), but my little Nokia couldn’t compete with the resplendent HTC Brilliants and iPhones that could tell me what song was playing in the bar, with 12-megapixel cameras better than my actual digital camera, built-in mp3 players, and a host of apps that could drain my wallet and my time all too easily.

Well, after coveting the HTC Brilliant for months, I can tell you that I still don’t own any kind of smartphone whatsoever. What can I say, those data plans are expensive! I did upgrade my phone to the LG Sentio, which is pretty nice and even has a touch screen.

One feature that smartphones can use and my decidedly non-smart phone cannot is Quick Response, or QR, codes. This is a QR code, which first appeared in Japan and is slowly making its way to North America. Using QR codes requires downloading a mobile app (an application that runs on the phone) that takes a picture of the QR code, reads it, and sends the user to the webpage encoded inside it (Wikipedia, 2011).

So what do people use QR codes for? Almost anything, but in North America they are mainly used for marketing. I stumbled across this article in which a small British Columbian café company takes advantage of QR codes by putting ads with the code inside trains; when passengers scan the code with their camera, they can order their coffee on the website the QR code takes them to. By the time they get off the train and into the café, their coffee is ready for them, and the passengers can run off to their next destination fully caffeinated. There are many other examples of companies using QR codes to market their wares, including plastering them onto billboards or showing them in live cover versions of songs originally performed by 13-year-old girls.

While growing in popularity, QR codes are in the middle of their own version of the Betamax vs. VHS fight. Actually, a better comparison would be HD DVD vs. Blu-ray vs. neither (i.e., online streaming like Hulu or Netflix). There are other kinds of codes that work much like QR codes but are either cheaper or a better fit for the company using them (Glazer, 2011). Alternately, QR codes and their ilk may be gone in a few years, a fad that people of the future will consider this decade’s Pet Rock. Predicting a winner in this fight between the codes may therefore be pointless.

For the time being, QR codes should make a bigger dent in the North American collective consciousness as more companies make use of the marketing abilities they provide. Their popularity may fade in a few years, but if I get a smartphone before then, I’d like to show off to my friends how I got my drink right when we walked in the door because I was aware of this nifty app. It’s the little victories in life.

Glazer, E. (2011, May 16). Target: Customers on the Go. Retrieved May 17, 2011, from Wall Street Journal: http://online.wsj.com/article/SB10001424052748704132204576285631212564952.html?mod=WSJ_Tech_LEFTTopNews
Wikipedia. (2011, May 16). QR Codes. Retrieved May 17, 2011, from Wikipedia: http://en.wikipedia.org/wiki/QR_code


Monday, May 16, 2011

A server is my best friend!

When thinking of topics to write about, I look at my previous entries and think to myself, “Oh, I probably should have talked about that first before I wrote about this.” I know this is only my fourth entry; however, the more I research networking topics to write about, the more I realize how deep the field is. My cloud computing entry was only about a page long, but it’s the subject of many books and websites, and it can be overwhelming for the beginner. I just like to get to the bare bones of a topic, and I can always write another entry later to elaborate on something that could use more discussion. While rereading my entry on cloud computing, I realized I only touched on the server aspect of it, and I thought I would take this time to go over just what the purpose of servers is.

This disambiguation page on Wikipedia lists all the different types of servers that exist (along with some non-computer definitions of server). I will just touch on a few of them for the sake of clarification and brevity.

The server that most internet users are likely familiar with is the web server. When someone visits a website such as www.televisionwithoutpity.com or www.avclub.com (guess which two sites I’ve been spending a lot of time on), the pages that make up the website are stored on a web server and viewed in a web browser. The browser and web server communicate with each other using TCP/IP, which stands for Transmission Control Protocol/Internet Protocol (Tomsho, Tittel, & Johnson, 2007). TCP/IP is an entire topic of its own; in fact, I had an entire class on just TCP/IP, but for now just understand that it is the suite of communication protocols that internet-capable devices use to talk to each other over the internet (Tomsho, Tittel, & Johnson, 2007). Web servers need to be on all the time in order to receive requests for pages, but the actual website itself doesn’t take up much server space, since it is mostly just made up of code.
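To give you a peek at what the browser actually sends, here is a little Python sketch that just builds the text of an HTTP request like the one a browser would send to a web server over TCP/IP. No real network connection is made; the host name is only an example.

```python
# The browser opens a TCP connection to the web server and sends a
# plain-text request like this one (HTTP/1.1 style).
host = "www.avclub.com"

request = (
    f"GET / HTTP/1.1\r\n"     # method, path, and protocol version
    f"Host: {host}\r\n"       # which site we want (one server can host many)
    f"Connection: close\r\n"  # ask the server to close when it's done
    f"\r\n"                   # blank line marks the end of the headers
)
print(request)
```

The server replies with a similar text response whose body is the page itself, which is why a website is "just code" from the server’s point of view.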

There are other servers for home and office use, like a print server, file server, and home server, which allow multiple computers in a network to share one device, so there is only need for one printer, or one music library that can be accessed by all the computers. (Wikipedia, 2011)

The last server I’ll touch on is the DNS server. Computers use a binary language made of 0s and 1s. (Wikipedia, 2011) However, all those 0s and 1s are very difficult for human eyes to read. When someone wants to visit the Google website, he just types in www.google.com, and the website pops up. What happens behind the scenes is that, through TCP/IP, the computer communicates with a DNS server. The DNS server acts as a translator between human and computer, turning the human-friendly name into a numerical IP address that the computer uses to take the person to the website they want to visit. (Wikipedia, 2011)
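A tiny Python sketch of that translation, using a made-up lookup table standing in for the real (worldwide, distributed) DNS. The name and address in the table are just illustrative; the `socket.gethostbyname` fallback is how a program would ask your real DNS server.

```python
import socket

# A toy "DNS table": the real Domain Name System is a huge distributed
# database, but its core job is the same -- map a name to an IP address.
toy_dns = {
    "www.example.com": "93.184.216.34",  # illustrative entry
}

def resolve(name):
    """Look the name up in our toy table; fall back to a real DNS query."""
    if name in toy_dns:
        return toy_dns[name]
    return socket.gethostbyname(name)  # asks your configured DNS server

print(resolve("www.example.com"))  # 93.184.216.34
```

Every time you visit a site by name, a lookup like this happens first, so you never have to remember the numbers yourself.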

There are other types of servers with specific jobs, but what they have in common is that they are devices in a central location that provide a service to multiple internet-capable devices in different locations. This is the idea behind cloud computing. Cloud computing takes many, many, MANY servers and provides a central place where even more users can take advantage of the benefits all those servers provide, including the ones mentioned in my previous entries.


Tomsho, G., Tittel, E., & Johnson, D. (2007). Guide to Networking Essentials. Boston: Thomson Course Technology.
Wikipedia. (2011, May 14). Domain Name System. Retrieved May 16, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Domain_Name_System
Wikipedia. (2011, April 7). File Server. Retrieved May 16, 2011, from Wikipedia: http://en.wikipedia.org/wiki/File_server

Saturday, May 14, 2011

Cloud computing isn’t for the birds

In my first entry here, I mentioned cloud computing as a way to always have your music with you, as long as you have a device capable of accessing the internet. Cloud computing isn’t just for music; it can handle anything a computer does, while taking up less space on your hard drive and less work from your machine. It’s becoming so popular that businesses are starting to use it (if you’ve ever worked for a non-technical company like me, you know that companies are SLOW to adopt new technologies).


So what is cloud computing and how does it work?

Cloud computing means using software and applications stored on a host’s servers, or ‘the cloud’, instead of having those programs stored on the end user’s hard drive. The end user works through an interface that accesses the host’s servers, and has to reach those servers every time he wants to use the program.

The most recent- and extreme- example of cloud computing is the debut of Google’s Chromebook. This device looks like a laptop, but according to the company, “will have no programs and no desktop, require no installations and rely completely on the Web for all of its functions.” Everything that will be done on the Chromebook will be done through Google’s servers.

There isn’t a need to buy a new device to use cloud computing. Web-based email services such as Hotmail are considered cloud computing- emails aren’t stored on your computer’s hard drive, but on the company’s servers. Another example is a service called Dropbox. It is used by companies to save storage space, and it allows employees to share and edit documents with each other. Instead of having different versions of the same document floating around on a company’s hard drives, the document is uploaded once, and members of the company can view and edit that document. Dropbox even has a feature where you can see a document’s editing history, so it’s possible to see just who is editing what.
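Here is a toy Python sketch of that "one shared copy with a history" idea. It is loosely in the spirit of what services like Dropbox provide, but the class and its behavior are purely illustrative, not Dropbox’s actual design.

```python
class SharedDocument:
    """One central copy of a document, with every saved version kept."""

    def __init__(self, text):
        self.history = [text]  # every saved version, oldest first

    def edit(self, new_text):
        self.history.append(new_text)

    @property
    def current(self):
        return self.history[-1]  # everyone sees the same latest copy

doc = SharedDocument("Q2 report draft")
doc.edit("Q2 report draft -- with the sales numbers added")
print(doc.current)
print(len(doc.history))  # 2 versions kept, so you can see who changed what
```

Because there is one central copy instead of emailed attachments, nobody ends up editing a stale version.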

There has to be an element of trust between the end user and the company if your information and hard work are going to be uploaded to someone else’s server and not kept on your own computer. As someone who makes backups of backups, I am not at the point where I feel I can have just one copy of my work and house it on another server. For example, for this blog, I first write everything down in my blog-notebook, or “blogbook.” Then I type it up and save it in my text editor, and then I upload it to Blogger. In the back of my head, I know it isn’t in Google’s best interest to lose my stuff, but I’m glad I have several copies, even if the written copy is only decipherable by me, and that’s if I’m lucky.

As the amount of the world’s information grows, it becomes more and more expensive for one person or business to store all of it, so cloud computing’s presence is only going to grow. Companies that offer cloud computing services are going to work incredibly hard to earn the public’s trust, so the risk of data loss should be minimal. I suggest becoming more familiar with what cloud computing offers, but keep an eye on your work and on the company’s reputation.

Bibliography

Strickland, J. (n.d.). How Cloud Computing Works. Retrieved May 13, 2011, from How Stuff Works: http://computer.howstuffworks.com/cloud-computing1.htm

Wednesday, May 11, 2011

1st post!!!!

The first thing that pops into my mind when I want to procrastinate is that I have to listen to my music. Right now I am obsessed with Lady Gaga’s new song “Judas” and cannot stop listening to it over and over, which I bet my neighbors love.

It was while I was procrastinating cleaning my apartment that I stumbled upon an article about Google and its new service called “Music Beta.” It appears to be a more accessible iTunes, where instead of just having your music stored on your computer, you store it in the Google cloud and can access your music wherever you have an internet connection, including on Android devices. Click here to read the article, which elaborates on the details and special features of Music Beta.

When it comes to up-and-coming gadgets and services, I tend to fall into the Rogers (1) category of “Late Majority,” especially compared to the early-adopter category that my dad, brother, and the other IT folk I know are a part of. While I am getting excited about my brand new (ah-hem, refurbished) Acer netbook, I have a bunch of people showing off their brand new iPads and all the nifty tricks they can do. I want to get in on the ground floor of this particular service, although since I buy my music off iTunes, I will most likely use Apple’s version when it comes out instead of Google’s. I love the idea of having my music with me everywhere I go, so I can start parties with my awesome taste in music.

The other pro of cloud music storage is that it provides another avenue for backup. Backing up information will be a running theme in my blog entries, not only because it is a smart thing to do, but because my networking textbooks and A+ certification guides drilled it into my skull, so I repeat that mantra as often as I breathe. Back up right now!
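Since I keep preaching it, here is a minimal Python sketch of the backup mantra: copy a folder into a timestamped backup folder. The function name and folder layout are my own invention for illustration, and the demo uses throwaway temporary folders so it is safe to run anywhere.

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def back_up(source_dir, backup_root):
    """Copy source_dir into backup_root under a timestamped folder name."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    destination = Path(backup_root) / f"backup_{stamp}"
    shutil.copytree(source_dir, destination)  # recursive copy of the folder
    return destination

# Demo with throwaway folders:
work = Path(tempfile.mkdtemp())
(work / "notes.txt").write_text("back up right now!")
copy = back_up(work, tempfile.mkdtemp())
print((copy / "notes.txt").read_text())  # back up right now!
```

Run something like this on a schedule (or just remember to), and a dead hard drive becomes an annoyance instead of a disaster.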

Speaking of A+ certification, my next entry will be about the exam itself and some good resources for studying. I like the A+ as a first certification to get, as it tests the basics of computers and networking. Before I started studying for it, I couldn’t tell you the difference between RAM and ROM. Now that I’ve passed, I proudly show off my A+ ID card whenever there is a lull in the conversation. (BTW, RAM is Random Access Memory, cheap and fast memory that the CPU uses as its working space, while ROM is Read Only Memory, which can’t be erased; an example is the ROM chip that holds the code used to boot the operating system. I will elaborate more on these topics at a later date.)


1. Wikipedia. (2011, April 29). Diffusion of Innovations. Retrieved May 11, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Diffusion_of_Innovations#Adopter_categories