Archive for the ‘IT Security’ Category

Kerberos – The Three Headed Dog

Posted: February 1, 2012 in Kerberos

What is Kerberos?

Kerberos takes its name from Greek mythology: Kerberos (Cerberus) was the fierce three-headed watchdog that guarded the gates of the underworld.

Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography.

A free implementation of this protocol is available from the Massachusetts Institute of Technology. Kerberos is available in many commercial products as well.

The Internet is an insecure place. Many of the protocols used in the Internet do not provide any security. Tools to “sniff” passwords off of the network are in common use by malicious hackers. Thus, applications which send an unencrypted password over the network are extremely vulnerable. Worse yet, other client/server applications rely on the client program to be “honest” about the identity of the user who is using it. Other applications rely on the client to restrict its activities to those which it is allowed to do, with no other enforcement by the server.

Some sites attempt to use firewalls to solve their network security problems. Unfortunately, firewalls assume that “the bad guys” are on the outside, which is often a very bad assumption. Most of the really damaging incidents of computer crime are carried out by insiders. Firewalls also have a significant disadvantage in that they restrict how your users can use the Internet. (After all, firewalls are simply a less extreme example of the dictum that there is nothing more secure than a computer which is not connected to the network — and powered off!) In many places, these restrictions are simply unrealistic and unacceptable.

Kerberos was created by MIT as a solution to these network security problems. The Kerberos protocol uses strong cryptography so that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identities, they can also encrypt all of their communications to assure privacy and data integrity as they go about their business.

Kerberos is freely available from MIT, under copyright permissions very similar to those used for the BSD operating system and the X Window System. MIT provides Kerberos in source form so that anyone who wishes to use it may look over the code for themselves and assure themselves that the code is trustworthy. In addition, for those who prefer to rely on a professionally supported product, Kerberos is available as a product from many different vendors.

In summary, Kerberos is a solution to your network security problems. It provides the tools of authentication and strong cryptography over the network to help you secure your information systems across your entire enterprise.
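
The core idea of proving identity with secret-key cryptography, without ever sending the secret itself over the network, can be illustrated with a minimal challenge/response sketch in Python. This is not the Kerberos protocol (which uses a Key Distribution Center, tickets, and session keys); it is only a toy illustration of the shared-secret principle, and all the names and values in it are made up:

```python
import hashlib
import hmac
import os

def prove(secret: bytes, challenge: bytes) -> bytes:
    # Both sides can derive the same proof from the shared secret,
    # so the secret itself never crosses the network.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# A secret known to both client and server (in Kerberos, keys like
# this are managed by the Key Distribution Center).
secret = b"shared-secret-key"

# The server sends the client a random challenge...
challenge = os.urandom(16)

# ...the client answers with a proof derived from the secret...
client_proof = prove(secret, challenge)

# ...and the server verifies it by computing the same proof itself.
assert hmac.compare_digest(client_proof, prove(secret, challenge))
print("client authenticated")
```

A real deployment would also authenticate the server back to the client and negotiate a session key for encrypting subsequent traffic, which is exactly what the full Kerberos exchange provides.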

These “White Hat” security researchers are ethical hackers, whose discoveries and inventions shake things up as they try to stay one step ahead of their underground “Black Hat” cousins.

Let’s meet some of them.

Robert “RSnake” Hansen

It’s not unusual to hear someone say “RSnake found out…”, and Hansen’s manic inventiveness includes the “Slowloris” low-bandwidth denial-of-service tool, which ended up being used by anti-Iranian protesters to attack the Iranian leadership’s Web sites; another, called “Fierce,” does DNS enumeration to find non-contiguous IP space to make attacking targets easier.


Greg Hoglund
Since 1998, Hoglund has been investigating rootkits and buffer overflows; he founded the Rootkit Web site and co-authored the books “Rootkits: Subverting the Windows Kernel” and “Exploiting Software.” One of his most memorable feats was exposing vulnerabilities associated with the online game World of Warcraft, detailed in a book he co-authored with security expert Gary McGraw, “Exploiting Online Games.”
Dan Kaminsky
History may remember Kaminsky as the diplomat and statesman among the “White Hats” because of his work behind the scenes with software and service providers to patch a flaw he discovered in 2008 in the DNS protocol which, if exploited, would have led to mass disruption of the Internet. Though some argued he should have immediately disclosed the flaw, others praised his discretion in quietly working to fix the problem before it was widely publicized.
Zane Lackey
This co-author of “Hacking Exposed: Web 2.0” and contributing editor to “Hacking VoIP” and “Mobile Application Security” digs into flaws in mobile and VoIP systems. In the past, some of his public talks and demos about compromising VoIP systems have been so detailed that chief information security officers at major corporations said they couldn’t advocate investing in VoIP until the issues raised were addressed by vendors.
Marc Maiffret
Once the bad boy “Chameleon” in the hacking group “Rhino9,” Maiffret luckily realized his hacking skills could be put to use in protecting Windows-based computers when, at age 17, he turned over a new leaf to co-found eEye Digital Security in 1997, working with security researchers Derek Soeder and Barnaby Jack. A demon at discovering Windows-based vulnerabilities, Maiffret also played a role in zeroing in on the infamous “Code Red” worm in 2001, which exploded across the Internet ravaging Microsoft-based computers.
Charlie Miller
Co-author of “The Mac Hacker’s Handbook,” Miller has hacked Safari for the last three years at the Pwn2Own contest, found an iPhone exploit that consisted entirely of SMS text messages, and was the first to hack Apple’s iPhone in 2007 and the Android phone in 2008. He also is credited with writing the first “virtual world” exploit for Second Life.
HD Moore

The open-source penetration-testing platform the Metasploit Project, founded in 2003 by Moore as chief architect, has become one of the most influential security inventions of the era, with its penetration tests and exploits used to uncover network weaknesses… by the good, the bad and the ugly.

Joanna Rutkowska
This brainy Polish researcher has made it an obsession to figure out how stealth malware, such as rootkits, can be so well hidden in software and hardware that few are ever likely to find it. Her “Blue Pill” attack against Microsoft’s Vista kernel protection mechanism, which brought a crowded room of security geeks at Black Hat to a standing ovation in 2006, was just her first revelation publicly to show how easy it is for dangerous code to hide in plain sight.
Sherri Sparks

Like Rutkowska, researcher Sparks has made rootkits and stealth malware her pursuit, and at one Black Hat Conference showed how operating system-independent rootkits, such as the proof-of-concept System Management Mode-based rootkit she built with colleague and co-founder Shawn Embleton, could be used to subvert and compromise computer networks.

Joe Stewart
With expertise in tracking malware and botnets used by cyber-criminals for financial gain, Stewart is often the first to identify dangerous new code specimens and how they work, such as the elusive Clampi Trojan and how the SoBig worm was sending spam. It all gives him an unpleasantly close look at East European and Chinese cyber-gang activity.
Christopher Tarnovsky
Like a surgeon re-tooling a pulsing heart, Tarnovsky makes use of specialized tools in his lab to bypass supposedly tamper-resistant hardware circuitry in semiconductors to gain root control to tap into data. As described in a Black Hat session, he did this recently with the Infineon processor with the Trusted Platform Module used in PCs, smart cards and even Microsoft’s Xbox. Others aren’t likely to duplicate his feats. Or are they?
Dino Dai Zovi
Co-author of the “Mac Hacker’s Handbook” and “The Art of Software Security Testing,” Zovi discovered and exploited a multi-platform security vulnerability in Apple’s QuickTime for Java in one night in order to hack a fully patched MacBook Pro to win the first Pwn2Own competition. He also was the first to publicly demonstrate VM hyperjacking using Intel VT-x in a live demo at Black Hat 2006. He says he can’t discuss “the hardest things” he ever hacked since that gets into non-disclosure agreement territory.
So now you know some of the big guns in the field of security!! Dig into the details of what they have done; it is informative and interesting, and worth thinking about how they did it.

For all of us working in IT, the word “bug” is very common. As test engineers, we are especially familiar with bugs 🙂 and their severity/priority.

For developers a bug can sometimes be a nightmare, but we testers enjoy them.

Here is how the word “bug” came to the IT world!


A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program’s source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.
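
As a concrete (and entirely hypothetical) illustration, here is one of the most common source-code mistakes, the off-by-one error, shown in Python together with its fix:

```python
def sum_first_n(values, n):
    """Return the sum of the first n elements of values."""
    total = 0
    # The buggy version read `for i in range(1, n)`, which silently
    # skips values[0] and stops one element short: a classic off-by-one.
    for i in range(n):
        total += values[i]
    return total

print(sum_first_n([10, 20, 30, 40], 3))  # 60
```

The buggy variant would have returned 50 here (20 + 30), a perfect example of “an incorrect or unexpected result” caused by a small human mistake in source code.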


Here is a photo of the first bug…


The first “computer bug” was a moth found trapped between the points at Relay #70, Panel F, of the Mark II Aiken Relay Calculator while it was being tested at Harvard University on 9 September 1947.

The operators affixed the moth to the computer log, with the entry: “First actual case of bug being found”.

They put out the word that they had “debugged” the machine, thus introducing the term “debugging a computer program”.

 In 1988, the log, with the moth still taped by the entry, was in the Naval Surface Warfare Center Computer Museum at Dahlgren, Virginia, which erroneously dated it 9 September 1945.

The Smithsonian Institution’s National Museum of American History and other sources have the correct date of 9 September 1947 (Object ID: 1994.0191.01). The Harvard Mark II computer was not complete until the summer of 1947.



Why do these vulnerabilities escape the notice of the highly skilled developers that create the applications and websites that increasingly underpin our global economy? How can we, as security professionals, achieve secure Web development and provide the developers with the tools required to reduce the number and frequency of these types of vulnerabilities? The key is for organizations to treat security vulnerabilities like software defects, aka “bugs.”

The first step in this paradigm shift, and perhaps the hardest one, is to get development managers and security officers to agree that security vulnerabilities should be treated the same way as usability or functionality bugs. Most development managers are focused on functional defects  —  defects that prevent the software from working correctly. The challenge comes in getting them to understand that security vulnerabilities, if exploited, also cause the software to function incorrectly, resulting in not only downtime to fix the defect, but also in financial and/or reputational losses to the organization.

Security vulnerabilities are software defects, and need to be handled exactly the same way. It can be challenging to reach that point of agreement; development managers often don’t understand the potential impact security vulnerabilities have on a business, they don’t know how to identify security vulnerabilities, and they often don’t know how to remediate them even if they could find them.
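
To make the point concrete, here is a hypothetical example in Python of a classic security vulnerability, SQL injection, behaving exactly like a functional defect: the unsafe version returns wrong results for crafted input, and the remediated version fixes the “bug” with a parameterized query. The table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: user input is pasted straight into the SQL string.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Remediated: a parameterized query, so input can never change the SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A crafted "name" makes the unsafe query return every row in the table:
print(find_user_unsafe("' OR '1'='1"))  # leaks [('admin',)]
print(find_user_safe("' OR '1'='1"))    # correctly returns []
```

Framed this way, the vulnerability is simply software that “functions incorrectly” on certain inputs, which is exactly the language development managers already use for bugs.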

In order to support developers in overcoming these challenges, security teams can increase Web application security testing during regularly scheduled vulnerability assessments on applications and websites already in production. Web security scanning tools like those offered by IBM or Veracode Inc. can be used for this type of testing. After testing is complete, developers should be charged with creating formal remediation plans for the security “bugs” that are found. By doing this, the developers become familiar with the security testing and remediation cycle.

Once developers are accustomed to the security testing and remediation cycle for software already in production, the next step is to incorporate security testing into the pre-production QA process, again using the same tools used for security assessments. At this point, security bug tickets should start to be opened in the same way other bug tickets are opened, using the same bug-tracking systems the developers already have at their disposal.

As a next step, we can further leverage the software development life cycle (SDLC) model to ensure consistency in the way Web applications are developed and tested. Even smaller businesses that produce software strictly for internal use are establishing rigid SDLCs as a means to reduce bugs. Those same processes can be leveraged effectively to locate security bugs as well. In order to leverage those established SDLC processes, developers will likely need training to use the aforementioned tools to identify security defects. Several vendors offer Web-based platforms that can easily be integrated into an SDLC, and can allow the developers to test their code at the unit level, early in the SDLC. As an added benefit, many such tools can provide remediation advice to developers at the time of detection. Finally, most of these tools can be integrated with the bug-tracking tools that developers already use, closing the loop on treating security vulnerabilities as functional defects.

By redefining security vulnerabilities as functional defects or bugs, and providing Web developers with the tools and processes they need to identify the bugs and remediate them, security stakeholders can make security an integral part of an organization’s SDLC, building security in, rather than bolting it on. And that will lead to websites and Web applications that are not only cheaper, but also more secure.

In its recent Website Security Statistics Report, WhiteHat Security found that during 2010, the average website had 230 vulnerabilities that could lead to a breach or loss of data. Other recent studies have shown that roughly 70% to 80% of Web applications contain significant vulnerabilities. And in its Top Ten 2010 report, the Open Web Application Security Project (OWASP) again reported that ten software risks were responsible for the vast majority of vulnerabilities in website software.

These reports are only the latest in a series stretching back many years, all identifying almost exactly the same problem: Websites and applications are still being published with security vulnerabilities that could be corrected relatively easily. Unfortunately, most of these security vulnerabilities are being found only after the applications and websites are published. A hoary old cliché about security is that it’s easier (and cheaper) to build security in, rather than trying to bolt it on afterwards through remediation.


Black hole in software

Posted: December 16, 2011 in Black hole in software





1) The term “black hole” is sometimes used to refer to an imaginary place where objects, files, or funds go when they get lost for no apparent reason.

2) In physics and astronomy, a black hole is a region in time and space within which gravity is so strong that nothing can escape, not even electromagnetic radiation such as visible light. Black holes are thought to surround certain celestial objects.

The idea of a black hole (if not the term itself) is not new. As the intensity of the gravitational field around an object increases, so does the escape velocity. The escape velocity for a celestial mass (such as a star, planet, or moon) is the vertical speed with which an object must be hurled from the surface in order to fly forever beyond the gravitational influence of the mass. If a substantial celestial body such as a star becomes small enough in diameter, the escape velocity at the surface can theoretically exceed the velocity of light. This idea occurred to astronomers even in Isaac Newton’s time. Modern astronomers believe they have observed black holes, consisting of stars that have collapsed under their own gravitation after spending their nuclear fuel. Black holes are also believed to exist at the centers of galaxies, including our own.
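
The escape-velocity relationship described above can be worked through numerically. This short Python sketch uses the standard formulas v_esc = sqrt(2GM/r) and, by setting v_esc equal to the speed of light, the Schwarzschild radius r = 2GM/c²; the constants and masses are ordinary textbook values:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    # v_esc = sqrt(2 G M / r)
    return math.sqrt(2 * G * mass_kg / radius_m)

def schwarzschild_radius(mass_kg: float) -> float:
    # Setting v_esc = c and solving for r gives r = 2 G M / c^2,
    # the radius below which not even light can escape.
    return 2 * G * mass_kg / c ** 2

earth_mass, earth_radius = 5.972e24, 6.371e6   # kg, m
sun_mass = 1.989e30                            # kg

print(f"Earth escape velocity: {escape_velocity(earth_mass, earth_radius):.0f} m/s")
print(f"Sun's Schwarzschild radius: {schwarzschild_radius(sun_mass):.0f} m")
```

Running this shows why black holes are so extreme: the Sun would have to be compressed to a radius of roughly 3 km before its escape velocity exceeded the velocity of light.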

A black hole produces bizarre effects on time and space. As seen from outside, an object falling into a black hole would approach the so-called event horizon, which is a spherical “one-way membrane” or “Rubicon” surrounding the black hole itself. If the object were a clock, it would seem to run more and more slowly as it approached the event horizon, and would never quite make it inside the black hole. From the reference frame of the falling object, nothing out of the ordinary would take place in the rate at which time passed, and the entry to the black hole would proceed apace, although the gravitational force near the event horizon might tear the falling object apart.

Black holes have been fodder for wild ideas and science-fiction stories since the concept became well known in the mid-1900s. Some scenarios are sensational to the point of madness. For example, suppose a tiny black hole, manufactured for use as a doomsday weapon, were dropped onto the surface of the earth. It would, as the story goes, proceed to devour the planet with unstoppable and phenomenal violence.

Blackhole list:

A blackhole list, sometimes simply referred to as a blacklist, is the publication of a group of IP addresses known to be sources of spam, a type of e-mail more formally known as unsolicited commercial e-mail (UCE). The goal of a blackhole list is to provide a list of IP addresses that a network can use to filter out undesirable traffic. After filtering, traffic coming from or going to an IP address on the list simply disappears, as if it were swallowed by an astronomical black hole. The Mail Abuse Prevention System (MAPS) Real-time Blackhole List (RBL), which has over 3,000 entries, is one of the most popular blackhole lists. Begun as a personal project by Paul Vixie, it is used by hundreds of servers around the world. Other popular blackhole lists include the Relay Spam Stopper and the Dialup User List.
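
Mechanically, most blackhole lists are queried over DNS: the client reverses the octets of an IP address, appends the list’s domain, and checks whether the resulting name resolves. The Python sketch below illustrates the convention; the list domain shown (zen.spamhaus.org) is just one well-known example, and calling is_listed performs a live DNS lookup, so only the name construction is demonstrated here:

```python
import socket

def dnsbl_query_name(ip: str, list_domain: str) -> str:
    # A DNSBL query reverses the address's octets and appends the
    # list's domain, e.g. 1.2.3.4 -> 4.3.2.1.some-dnsbl.example
    return ".".join(reversed(ip.split("."))) + "." + list_domain

def is_listed(ip: str, list_domain: str) -> bool:
    # If the constructed name resolves, the address is on the list.
    try:
        socket.gethostbyname(dnsbl_query_name(ip, list_domain))
        return True
    except socket.gaierror:
        return False

# Constructing the lookup name involves no network traffic:
print(dnsbl_query_name("127.0.0.2", "zen.spamhaus.org"))
# is_listed("127.0.0.2", "zen.spamhaus.org") would perform the live query;
# 127.0.0.2 is the conventional test address most DNSBLs keep listed.
```

A mail server typically performs this check at connection time and rejects or tags the message when the sending address is listed.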

Heuristics is the application of experience-derived knowledge to a problem and is sometimes used to describe software that screens and filters out messages likely to contain a computer virus or other undesirable content. A heuristic (pronounced hyu-RIS-tik and from the Greek “heuriskein” meaning “to discover”) is a “rule-of-thumb.” Heuristics software looks for known sources, commonly-used text phrases, and transmission or content patterns that experience has shown to be associated with e-mail containing viruses.

Because many companies or users receive a large volume of e-mail and because legitimate e-mail may also fall into the pattern, heuristics software sometimes results in many “false positives,” discouraging its use. Security experts note that, although such software needs to get better, it is a valuable and necessary tool.

 It pertains to the process of gaining knowledge or some desired result by intelligent guesswork rather than by following some preestablished formula. (Heuristic can be contrasted with algorithmic.)


The term seems to have two usages:

1) Describing an approach to learning by trying without necessarily having an organized hypothesis, or a way of proving whether the results proved or disproved the hypothesis. That is, “seat-of-the-pants” or “trial-and-error” learning.

2) Pertaining to the use of the general knowledge gained by experience, sometimes expressed as “using a rule-of-thumb.” (However, heuristic knowledge can be applied to complex as well as simple everyday problems. Human chess players use a heuristic approach.)

As a noun, a heuristic is a specific rule-of-thumb or argument derived from experience. The application of heuristic knowledge to a problem is sometimes known as heuristics.
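
A heuristic e-mail filter of the kind described above can be sketched in a few lines of Python. The phrases and weights here are invented purely for illustration; a real filter would derive its rules-of-thumb from experience with actual spam, and (as noted above) would still produce occasional false positives:

```python
# Each rule is a (phrase, weight) pair: a rule-of-thumb derived
# from experience with past spam, not a proven formula.
SPAM_PHRASES = [
    ("free money", 3.0),
    ("act now", 2.0),
    ("winner", 1.5),
    ("unsubscribe", 0.5),
]

def heuristic_score(message: str) -> float:
    text = message.lower()
    return sum(weight for phrase, weight in SPAM_PHRASES if phrase in text)

def looks_like_spam(message: str, threshold: float = 3.0) -> bool:
    # An intelligent guess, not a certainty: legitimate mail that
    # happens to match the patterns will be flagged too.
    return heuristic_score(message) >= threshold

print(looks_like_spam("You are a WINNER! Free money, act now!"))  # True
print(looks_like_spam("Meeting moved to 3pm"))                    # False
```

Tuning the threshold trades off missed spam against false positives, which is exactly the weakness the paragraph above describes.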



Spam is flooding the Internet with many copies of the same message, in an attempt to force the message on people who would not otherwise choose to receive it. Most spam is commercial advertising, often for dubious products, get-rich-quick schemes, or quasi-legal services. Spam costs the sender very little to send — most of the costs are paid for by the recipient or the carriers rather than by the sender.

There are two main types of spam, and they have different effects on Internet users. Cancellable Usenet spam is a single message sent to 20 or more Usenet newsgroups. (Through long experience, Usenet users have found that any message posted to so many newsgroups is often not relevant to most or all of them.) Usenet spam is aimed at “lurkers”, people who read newsgroups but rarely or never post and give their address away. Usenet spam robs users of the utility of the newsgroups by overwhelming them with a barrage of advertising or other irrelevant posts. Furthermore, Usenet spam subverts the ability of system administrators and owners to manage the topics they accept on their systems.

Email spam targets individual users with direct mail messages. Email spam lists are often created by scanning Usenet postings, stealing Internet mailing lists, or searching the Web for addresses. Email spam typically costs users money out-of-pocket to receive. Many people – anyone with metered phone service – read or receive their mail while the meter is running, so to speak. Spam costs them additional money. On top of that, it costs money for ISPs and online services to transmit spam, and these costs are passed directly on to subscribers.

One particularly nasty variant of email spam is sending spam to mailing lists (public or private email discussion forums.) Because many mailing lists limit activity to their subscribers, spammers will use automated tools to subscribe to as many mailing lists as possible, so that they can grab the lists of addresses, or use the mailing list as a direct target for their attacks.


 Spam cocktail (or anti-spam cocktail):

A spam cocktail (or anti-spam cocktail) is the use of several different technologies in combination to successfully identify and minimize spam. The use of multiple mechanisms increases the accuracy of spam identification and reduces the number of false positives.

A spam cocktail puts each e-mail message through a series of tests that provides a numeric score showing how likely the message is to be spam. Scores are computed and the message is assigned a probability rating. For example, it may be determined that a message has an 85% probability of being spam. E-mail administrators can create rules that govern how the messages are handled based on their scores; the highest scores may be deleted, medium scores may be quarantined, and lower scores may be delivered but marked with a spam warning.

A spam cocktail commonly includes several of the following identification methods, which may be weighted differently for message scoring:

  • Machine learning: Implementing sophisticated computer algorithms that improve over time to analyze the subject line and contents of a message and predict the probability that it is spam based on past results. The Bayesian filter is a type of machine learning.
  • Blacklisting: Subscribing to a blacklist or blackhole list of known spammers and blocking messages from those sources
  • Content filtering: Using programs that look for specific words or criteria in the subject line or body of a message
  • Spam signatures: Using programs that compare the patterns in new messages to patterns of known spam
  • Heuristics: Using heuristic programs that look for known sources, words or phrases, and transmission or content patterns
  • Reverse DNS lookup: Checking whether the IP address matches the domain name from which a message is coming.
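
Putting the pieces together, the scoring and routing logic of a spam cocktail might look like the following Python sketch. The weights, thresholds, and per-test scores are entirely hypothetical; a real product would tune them continuously:

```python
# Hypothetical per-test spam probabilities, each in [0, 1], combined
# with weights reflecting how much we trust each mechanism.
WEIGHTS = {
    "machine_learning": 0.35,
    "blacklist":        0.25,
    "content_filter":   0.15,
    "signature":        0.15,
    "heuristics":       0.10,
}

def cocktail_probability(test_scores: dict) -> float:
    # Weighted combination of the individual tests' verdicts.
    return sum(WEIGHTS[name] * score for name, score in test_scores.items())

def route(test_scores: dict) -> str:
    # Administrator-defined rules acting on the combined score.
    p = cocktail_probability(test_scores)
    if p >= 0.90:
        return "delete"
    if p >= 0.50:
        return "quarantine"
    return "deliver"

scores = {"machine_learning": 0.9, "blacklist": 1.0, "content_filter": 0.8,
          "signature": 0.7, "heuristics": 0.6}
print(round(cocktail_probability(scores), 2), route(scores))
```

Because no single test decides the outcome, one noisy mechanism (say, an over-eager content filter) cannot by itself condemn a legitimate message, which is the whole point of the cocktail approach.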