
Wednesday, November 9, 2011

GHC: PhD Forum 1: Hardware and Security

Intelligent Cache Management for Reducing Memory System Waste

Presenter: Samira M. Khan (University of Texas at San Antonio)

Caches are just not efficient: a single cache miss can add hundreds of extra cycles of delay. Processor performance doubles roughly every 18 months, but memory performance doubles only every 10 years! Memory just can't keep up.

Most of a microprocessor's die area is cache, but it isn't used efficiently. Using the cache well is important for improving performance and reducing power. The problem is dead blocks: blocks that sit in the cache without ever being used again. Up to 86% of the blocks in the cache are dead at any one time.

This waste is a consequence of the least-recently-used (LRU) replacement policy, under which many blocks simply linger unused after their last access. Khan's research is based on predicting which blocks are going to be dead, changing the replacement policy to take advantage of those predictions, and thereby reducing the power requirements of the system.
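To see where dead blocks come from, here is a toy simulation I put together (my own illustration, not Khan's actual predictor): an LRU cache that counts how many evicted blocks were never re-referenced after insertion.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Simulate a fully associative LRU cache and count 'dead' blocks:
    blocks evicted without a single re-reference after insertion."""
    cache = OrderedDict()          # block -> was it reused since insertion?
    dead = evicted = 0
    for block in trace:
        if block in cache:
            cache[block] = True    # reused, so not dead
            cache.move_to_end(block)
        else:
            if len(cache) >= capacity:
                _, reused = cache.popitem(last=False)  # evict LRU block
                evicted += 1
                if not reused:
                    dead += 1
            cache[block] = False
    return dead, evicted

# A streaming pattern: most blocks are touched once and never reused.
trace = [1, 2, 1, 3, 4, 5, 6, 2, 7, 8]
dead, evicted = simulate_lru(trace, capacity=4)
```

Even on this tiny trace, 4 of the 5 evicted blocks are dead: a dead-block predictor could have evicted them early and kept more useful data in the cache.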

Usable Security and Privacy Policy Management

Presenter: Maritza L. Johnson (Columbia University)

Johnson's research is about access control and policy management. She started out with some real-world examples, like how all of us are wearing Grace Hopper Conference badges, which grant us access to this session.

Johnson's next slide was the Confidentiality, Integrity, and Availability triangle, which she used to discuss the balance involved in read/write access to files, an everyday problem in shared environments. Approaching this properly requires a constant cycle of evaluation, analysis, and design: you can't just come up with one design and be unwilling to modify it, as needs and usage may change.

As users of Facebook, we're all access control managers as well. Johnson and her colleagues did their research around Facebook because it's so open and available for studying.

One question the research sought to answer: are users' Facebook privacy settings correct? It's hard to know someone else's intent for sure, as each person is comfortable sharing a different amount of information.

They developed an application to look for potential violations between what the user intended and what they actually got. For example, if someone shared publicly "I'm at work. I'm just laying on these chairs until my boss..." ... should that really be public?

In the study, participants told the app what types of information they wanted to share; the app then watched what happened over a period of time and showed users what it believed were violations of their policy. Many of these were confirmed to be violations, yet users still didn't want to change their privacy settings.
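The core check is easy to sketch. This is my own hypothetical version of the idea, not Johnson's tool: compare each post's actual visibility against the audience the user said they intended.

```python
def find_violations(posts, intended_audience):
    """Flag posts whose actual visibility is broader than the stated
    intent. Audiences are ranked from most to least restrictive."""
    rank = {"only_me": 0, "friends": 1, "friends_of_friends": 2, "public": 3}
    return [p["text"] for p in posts
            if rank[p["visibility"]] > rank[intended_audience]]

# Hypothetical posts: one shared more widely than the user intended.
posts = [
    {"text": "lounging at work until the boss gets back", "visibility": "public"},
    {"text": "happy birthday!", "visibility": "friends"},
]
flagged = find_violations(posts, intended_audience="friends")
```

The hard part, of course, is the step this sketch skips: inferring what the intended audience for each *kind* of content actually is, which is why the study had to ask participants directly.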

The ideal setting for most users is actually just to share with friends only.

Detecting Stealthy Malware Using Behavioral Features in Network Traffic

Presenter: Ting-Fang Yen (Carnegie Mellon University)

Yen started out with a great background on what a botnet is: infected hosts with a subtle command & control system, carrying out malicious activities. A single botnet can have 3.6 million hosts; combined, botnets have more computing power than the top 500 supercomputers.

A botnet may have a centralized control, where all infected hosts get their commands from a central control computer, but many have peer-to-peer control.

Previous work in this area looked for the signature of a known botnet to identify new infections; similar work maps the behaviour of a botnet.

Botnets are becoming more sophisticated, but our current techniques are just not keeping up.

Yen's research is about finding previously unknown bots. One approach builds on research showing that most hosts use a consistent amount of network traffic on a daily basis: if a host's traffic suddenly rises, or occurs during odd hours, the host may be infected. Bots also use consistent payloads, so another approach is to look for a lot of similar communication.
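The "consistent daily traffic" idea boils down to a per-host baseline check. Here's a minimal sketch of that intuition (my own toy version, using a simple z-score test, not Yen's actual detector):

```python
import statistics

def traffic_anomaly(history, today_bytes, k=3.0):
    """Flag a host whose traffic today deviates from its own baseline
    by more than k standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today_bytes - mean) > k * stdev

# 30 days of roughly steady daily traffic (in MB) for one host.
history = [100, 105, 98, 102, 97, 101, 103, 99, 104, 100] * 3
quiet_day = traffic_anomaly(history, today_bytes=105)  # within baseline
spike_day = traffic_anomaly(history, today_bytes=400)  # sudden surge
```

A real system would also need to model time-of-day patterns to catch the "odd hours" case, and compare payloads across hosts to catch the consistent-payload case.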

Peer-to-peer botnets tend to blend in, traffic-wise, with other, normal peer-to-peer traffic. The researchers noticed, though, that the timing of botnet packets is too regular: it isn't being driven by a human.
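That "too regular" observation can be made concrete with a simple statistic. As a rough illustration (my own sketch, not Yen's method), a low coefficient of variation in the inter-arrival gaps suggests a machine-driven schedule:

```python
import statistics

def is_machine_like(timestamps, cv_threshold=0.1):
    """Near-constant inter-arrival times suggest automated traffic;
    human-driven traffic is bursty. The coefficient of variation
    (stdev / mean) of the gaps captures this."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold

bot = [0, 60, 120, 180, 240, 300]    # a heartbeat every 60 seconds
human = [0, 5, 90, 95, 400, 410]     # bursty, human-paced activity
```

The threshold here is an arbitrary assumption; a real detector would learn it from labeled traffic.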

This post was syndicated from Thoughts on security, beer, theater and biking!

Wednesday, September 30, 2009

Grace Hopper: PhD Forum 4

Sitting in my second packed room of the Grace Hopper conference! Considering we're still before "official" launch time, I can't believe how many women are here and how packed every session is! Here in my first session in the PhD series, I'm excited to see three PhD students present their research.

An n-gram Based Approach to the Classification of Web Pages by Genre: Jane E Mason, Dalhousie University:

Mason is looking for a novel approach to classifying web sites by actual genre, not just keywords: for example, searching for a health condition and seeing only information pages, instead of pages from drug manufacturers trying to sell you something.

Mason chose to use n-grams because they are relatively insensitive to spelling errors, are language independent, and are relatively easy to program. She combines these and then compares them with the Keselj distance function, which is apparently "simple", but it has been a while since I've been in Differential Equations :-)
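For the curious, here's my sketch of what an n-gram profile comparison looks like. I'm assuming the Common N-Gram dissimilarity from Keselj et al.'s authorship work (a normalized squared difference over the union of two frequency profiles); Mason's exact variant may differ.

```python
from collections import Counter

def ngram_profile(text, n=3, size=50):
    """The most frequent character n-grams of a text, as relative
    frequencies. Character n-grams tolerate spelling errors and
    work in any language, which is why Mason chose them."""
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.most_common(size)}

def cng_distance(p1, p2):
    """Common N-Gram dissimilarity: 0 for identical profiles,
    larger the less two texts' n-gram distributions overlap."""
    total = 0.0
    for g in set(p1) | set(p2):
        f1, f2 = p1.get(g, 0.0), p2.get(g, 0.0)
        total += ((f1 - f2) / ((f1 + f2) / 2)) ** 2
    return total

# Hypothetical genre exemplars: informational vs. sales language.
health = ngram_profile("symptoms causes diagnosis and prevention of migraine")
sales = ngram_profile("buy now discount prices order your medication today")
```

To classify a new page, you would build its profile and assign it the genre of the nearest exemplar.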

Mason and her team have been looking at how to let some web pages have multiple genres, which means that some pages end up with no genre at all: noise! While it's easy for a human to identify a nonsense/useless web page, I think it's pretty cool to get a computer to do this for you, so such pages won't even show up in your search results!

Ant Colony Optimization: Theory, Algorithms and Applications: Sameena Shah, Indian Institute of Technology Delhi:

I've never heard of this type of optimization, so this was very interesting for me. Shah chose to study this area of optimization because ants don't have centralized coordination and they make great decisions based only on local information. She sees this as a great method to apply to distributed computing. Now, how do we get computers to leave pheromones on the path of least resistance?

Other than the lack of pheromones, another problem she had to solve is that ants don't always find the shortest path: if enough ants have taken a longer path before the short path is discovered, the whole colony will use the longer path and ignore the short one. Obviously, she doesn't want that shortcoming in her algorithm :-)
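Since I'd never seen this technique before, I sketched the smallest version of it I could. This is a generic textbook-style ACO over a few alternative paths, not Shah's algorithm: ants pick paths in proportion to pheromone, shorter paths get bigger deposits, and evaporation lets the colony forget early bad choices (which is exactly the fix for the "stuck on a longer path" problem above).

```python
import random

def ant_colony(lengths, ants=100, rounds=50, rho=0.5, seed=1):
    """Minimal ant colony optimization over alternative paths.
    Returns the index of the path the colony converges on."""
    rng = random.Random(seed)
    tau = [1.0] * len(lengths)               # initial pheromone levels
    for _ in range(rounds):
        deposits = [0.0] * len(lengths)
        for _ in range(ants):
            # Each ant picks a path with probability proportional to tau.
            i = rng.choices(range(len(lengths)), weights=tau)[0]
            deposits[i] += 1.0 / lengths[i]  # shorter path, more pheromone
        # Evaporation (rho) plus this round's deposits.
        tau = [(1 - rho) * t + d for t, d in zip(tau, deposits)]
    return tau.index(max(tau))

best = ant_colony([10.0, 3.0, 7.0])  # the length-3 path should win
```

Without the evaporation term, an early pile-up of pheromone on the length-10 path could lock the colony in forever, which is the shortcoming Shah described.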

Shah does have a slide in her presentation showing the statistical "solution", but it's a much more complicated formula than anything I saw in my intro statistics course at Purdue. :)

Using Layout Information to Enhance Security on the Web: Terri Oda, Carleton University:

Ms. Oda is a woman after my own heart, starting her presentation with an xkcd comic :-)

She starts her talk by discussing different types of security, like secure networks between companies. Oda tells us how threat models are no longer obvious: seemingly innocuous Facebook applications that have access to your private chats and private messages on the site, websites that don't properly protect passwords, and malicious users on the same forums. Her talk then moved on to the types of threats she's actually trying to protect you against: cross-site scripting, and previously good sites that have gone bad.

She makes an excellent point that most (all?) web pages are built by web designers (aka artists), NOT web security experts, and with all their deadlines and basic functionality bugs, there is no time to even think about security. Is it any wonder we have so many attacks and vulnerabilities out there?

But how can we solve this? Schedules will never have enough padding, and most people designing web sites did not receive a BS degree from Purdue (where we were told over & over again that security must be designed in from the beginning, not as an add-on).

She's looking at using heuristics to correctly identify the different elements on a page, so that it's visually evident which components come from the site you're visiting and which are being served from an external site (like an ad). I can't wait to see how her research turns out, and how much she can protect users with a simple browser add-on!
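The first step of such a heuristic is easy to imagine. Here's my own toy version (not Oda's actual work, which also uses layout information): walk the page's HTML and tag each script, image, or iframe as same-site or external by comparing hostnames.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OriginClassifier(HTMLParser):
    """Tag every script/img/iframe on a page as same-site or external,
    a crude text-only version of the visual cue described above."""
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.same_site, self.external = [], []

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).netloc
        # Relative URLs have no host and come from the page's own site.
        if not host or host == self.page_host:
            self.same_site.append(src)
        else:
            self.external.append(src)

# Hypothetical page served from example.com with one third-party ad.
html = ('<script src="/js/app.js"></script>'
        '<img src="https://ads.example.net/banner.png">'
        '<script src="https://example.com/lib.js"></script>')
c = OriginClassifier("example.com")
c.feed(html)
```

A browser add-on could then highlight the external components visually; the hard research problems are inline scripts and sites that serve their own content from CDNs.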