On Monday, February 1, 2010, DNS-OARC organized a series of presentations at the start of the 2nd Global Annual Symposium on DNS Security, Stability and Resiliency.

Presentations and Notes

13:15: Introduction to DNS-OARC Roy Arends / DNS-OARC
 
13:25: Investigating anomalous DNS traffic: A proposal for an address reputation system Sebastian Castro / .NZ Registry Services
Q&A Willingness of operators to cooperate? There's a need from the TLD operators to handle these kinds of events - to get traction, plans to address a wider audience. Building a reputation system is one thing, but using it is another. What's the risk of impacts of false positives (in social and technical domains)? Blacklists frequently don't use such complicated schemes - haven to considered highly strict rules in terms of how identities are added. But this will require more thought as experience grows. Once you have a blacklist, how would you envisage its use? Blocking services? Other? Filter at the edge? Haven't thought fully about this - future study topic. Comment: During the event, several AS's were shown. This kind of data shows that a larger-scale study might be needed. So, before we get to "how does the blacklist work?", it might be good to do a larger study. Comment: There needs to be some back-pressure to deal with trying to get the source to pull their weight to resolve the problem (spammers, for instance). Otherwise, the blacklist's size is going to monotonically increase. Experientially, it is harder to work the problem back to the source. Comment: As an operator, this data (or a blacklist) by itself is not particularly interesting, however, in combination with other data -- through a kind of intelligence fusion of data -- we can see some real useful trends. In other words, it's the combining of the data that is interesting, in terms of characterizing the curent situation. Comment: How far down the chain of "operators" do you go to collect (and analyze) this data? Different players have different ideas about what constitutes an anomaly, or what constitutes a threat. Avoiding subjective language, such as good or bad, might be helpful. Comment: One idea put forth was that ADSL customers who are characterized as 'bad' should be blocked from using (particular country's) network resolution services, because it could lead to an attack. (if somoene can clarify this comment, please chime in.) Comment: Seeing an anomaly might not necessarily indicate that a problem is existential in the indicated domain. Instead, it is possible that you are measuring a problem in your own infrastructure. How much resource (time & money) did this cost .NZ to do this investigation? And how are we going to see sharing of this data? Comment: Smaller organizations are not going to have the resources to be able to focus on all this data. Two days of work. Tools used were largely already done (and known to the principal investigator).
14:01: APNIC DNS Measurement & Perspectives on 'DNS Health' George Michaelson / APNIC
An interesting observation: with DNSSEC enabled, average UDP packet size increases, including a fair body of packets in excess of 800 bytes. This could have a significant impact where network designers assume that valid UDP traffic should nominally not include packet sizes exceeding 512 bytes. (A measurement sketch illustrating this follows these notes.)

Q&A

Q: If we divide our measurement data into two categories, one caused by human behavior and one by machines, can we explain the diurnal pattern shown in the "NXDOMAIN from DSC" slide?
A: Yes, it seems natural that there would be two different categories, and your observation is likely correct. However, there is still work to do to characterize the collected data in a meaningful way.

Comment: Talked about DSC, what it is doing, and what it should be doing. DNS-OARC was started with an NSF grant; a problem was how to spend the money in a meaningful, useful way. Requirements are now being gathered for next-generation work. I want NSTAT, a place where data can be aggregated en masse. A framework now seems achievable. (The next need is funding.)

Admin note: Break moved to 15:00 and shortened to a quarter-hour.
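A minimal sketch of the kind of packet-size measurement described above, using the dpkt library (the capture file name is a placeholder, and an Ethernet link type is assumed): tally DNS-over-UDP response payload sizes from a pcap and report how many exceed the traditional 512-byte assumption.

    import dpkt

    def udp_dns_response_sizes(pcap_path):
        # Collect DNS-over-UDP response payload sizes (source port 53)
        # from a pcap capture; assumes Ethernet framing.
        sizes = []
        with open(pcap_path, "rb") as f:
            for _ts, buf in dpkt.pcap.Reader(f):
                eth = dpkt.ethernet.Ethernet(buf)
                ip = eth.data
                if not isinstance(ip, dpkt.ip.IP):
                    continue
                udp = ip.data
                if not isinstance(udp, dpkt.udp.UDP) or udp.sport != 53:
                    continue
                sizes.append(len(udp.data))
        return sizes

    sizes = udp_dns_response_sizes("capture.pcap")  # hypothetical file name
    if sizes:
        over = sum(1 for s in sizes if s > 512)
        print(f"{len(sizes)} responses, {over} over 512 bytes "
              f"({100.0 * over / len(sizes):.1f}%)")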
14:41: Measurement for ascertaining health of the DNS James Galvin / Afilias
Note-taker comment: Taking longer notes on this talk, because the slides are highly summarized.

When we collect data in this kind of context, we have to think about massive amounts of data, the size of which is going to grow continually. Are we going to collect measurements in the raw and keep them? Or are we going to create summaries and then aggregate the summaries?

The next question is how we sample the data we collect. Do we start looking outside our own infrastructure (at the entry point, for instance), or are we going to look inside our infrastructure? There are arguments for and against each possibility, largely because sampling at one point and not at another will gain or lose statistically important data. This question deserves additional study.

We need to think about things like creating a technical advisory board of people who are knowledgeable about analyzing the data, so that we can make sense of the information that is collected. This analysis needs to take into account ideas from a wide variety of points of view.

What will the introduction of DNSSEC do to the collection and analysis of data? Will it create a new vector to analyze? Or will it accentuate the negative aspects of existing data analysis? With respect to widespread DNSSEC implementation: as the amount of data being moved increases, and as we see more signed zones being transferred, we have to think about whether instantaneous propagation is the right model.

The last point, for discussion, is DNS views ("views" is intended in a generic sense, with apologies to BIND). The hypothesis is that views and filtering are going to become mainstream -- and perhaps even mandated in some jurisdictions. In sum, entire zones will not be delivered; filtering will be required in some circumstances.

Q&A No questions.

Admin note: Break until 15:15.
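One way to make the raw-versus-summary trade-off above concrete (a sketch under assumptions: the "epoch qname qtype" log layout and five-minute interval are inventions here, not anything from the talk): collapse a raw query log into fixed-interval counters, trading per-query detail for bounded storage.

    from collections import Counter

    def summarize(log_lines, interval=300):
        # Collapse raw "epoch qname qtype" records (an assumed layout)
        # into per-interval query-type counts: bounded storage, less detail.
        buckets = {}
        for line in log_lines:
            ts, _qname, qtype = line.split()
            slot = (int(float(ts)) // interval) * interval
            buckets.setdefault(slot, Counter())[qtype] += 1
        return buckets

    raw = ["1265000000 www.example.nz A",
           "1265000010 mail.example.nz MX",
           "1265000400 www.example.nz AAAA"]
    for slot, qtypes in sorted(summarize(raw).items()):
        print(slot, dict(qtypes))

Note what is lost in the summary: once the qname is discarded, no later analysis can recover it, which is exactly why the talk flags the raw-versus-summary question as deserving study.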
15:18: Characterizing DNS Client Behavior using Hierarchical Aggregate Keisuke Ishibashi / NTT
Q&A (Referring to slide 11, "Experimental results":) You claim that your methodology increases accuracy by 10-20%. What's your ground truth to be able to make such an assertion? Investigator made criteria but this was rough guidance only. Is the mathematics intended to achieve on-the-fly results? Second question, this seems simple and cheap to calculate, but it might lead to misclassification rates that are high. What is the intent of the use of the result of the calculation? Comments? Yes, it is an easy calculation and a valid comment. (nfi)
15:49: JPRS activities on monitoring and measurement of JP DNS and the registry system Shinta Sato / JPRS
Q&A Do you develop these criteria for yourself and then discuss it in the community? Is there a reflection of others' needs in these criteria? We haven't asked external communities -- these are very internal thoughts, which we have not opened up to the public. What drove those numbers (you picked 15m, 1h) -- is there some goal you're aiming at? These values were set merely from our own thoughts, not based on some particular objective standard. These values would need to be revisited from time to time to ensure validity. You chose 50% change in the size of the zone. Do you also cb3r3seck to see the # of changes to the zone? No, we only check the file size of the zone. We don't account to changes in the resource records. In Japan, when you're referring to [medical] health, there's "public health" and "private health". We are seeing that your zone files are handled exclusively within your domain? Or do you allow transfers to areas outside your own domain (where you are not fully in control of the health of the environment)? We don't transfer our entire dataset outside. We've heard about how to become healthy or how to stay healthy, but it didn't really address what to do when we've become sick. When the zone file changes are too large, we keep using the zone file and alert to the operators to see what is wrong. After the detection of the existence of an unhealthy state, we have other operational procedures that exceed the scope of this talk, so I did not cover these topics here.
16:19: L-Root Update Joe Abley / ICANN
This is an operational update on L-Root, not a talk about the signing of L-Root.

Long-Term Query Capture (LTQC) is a tool used by several other root servers. The data from it are stored at OARC. It has the distinct advantage of being targeted, and the resultant datasets are small. Beyond what's shown on the slide (#1), we also have other ongoing tasks, such as graphically displaying trends and data.

On 2010-01-27, we made the transition from the unsigned root to the DURZ. http://root-dnssec.org/

Questions slide: (1) What else should we measure? (2) What analysis could be done on what we are measuring to identify problems?

Comment: How many half-open TCP connections should be allowed before shutting them down? How much of that (measurement) do you keep?
A: We are not keeping that data, and it's a good point.

Comment: You don't know how many queries went dark (since DNSSEC went live).
Other person's comment: Trying to distinguish between requests and other data.
Other person's comment: It's also impossible to know what you don't know.

If people have thoughts about what triggers should cause alarm, we would be very interested in capturing that data.
16:41: January 12 Baidu's Attack - What Happened and What Shall We Do? Wang Zheng / CNNIC
Many efforts are underway to enhance the security of the DNS service, DNSSEC being one instance. The January 12 attack against Baidu is a reminder to keep an eye on the security of the registry system. Prior to the January 12 attack, Baidu.com had been attacked significantly on only one occasion, in December 2006. Baidu.com's registrar is Register.com, based in New York.

Chain of events ("at first sight"):
At 0740 on January 12, Baidu went offline and traffic was redirected to a website in the Netherlands. DNS records had been modified, causing the redirection. It is believed that Register.com was breached, allowing access to, and the gaining of modification rights over, Baidu.com's records.
At 0901, dig showed baidu.com's NS records pointing to yns{1,2}.yahoo.com.
At 0936, baidu.com's NS records pointed to ns230{3,4}.hostgator.com.
Similarly, the registration information was clearly changed at various points in time during the day.

Registrar: The rollback was done by Register.com at the request of Baidu. Direct correction of the records was declined by the registrar, due to a claimed lack of authority.

[Outline of Registry -> Registrar -> Registrant chain]

Points to consider: Do we need special security protections? Do we need enhanced communications between registrant and registrar?

Q&A

Q: What is the status of Baidu.com's lawsuit against Register.com?
A: We have no information about this. However, it is likely that the aim of this action is to try to get a clear explanation from Register.com as to why this was so problematic. In a larger sense, this kind of issue needs to be more clearly resolved in order to enhance the stability of the entire Internet.

Q: What TTLs were set? How long did it take for Register.com to figure out that Baidu was making a legitimate request?
A: Baidu asked Register.com to reset the records. However, Register.com refused to directly correct the DNS records; the only possible mechanism was a rollback. We do not have all of the operational details of what happened, but the resolution was not handled immediately, and there was a long time (measured in hours) before the records were fully corrected.
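A minimal watchdog sketch for the kind of delegation change seen in this incident (a sketch using the dnspython library; the expected NS set and domain name are placeholders, and this is an illustration rather than anything proposed in the talk): alert when a domain's advertised NS set drifts from an expected whitelist.

    import dns.resolver  # dnspython; resolve() requires version >= 2.0

    EXPECTED_NS = {"ns1.example.com.", "ns2.example.com."}  # placeholder set

    def unexpected_ns(domain):
        # Return any advertised NS names that are not in the expected set.
        answer = dns.resolver.resolve(domain, "NS")
        seen = {rr.target.to_text().lower() for rr in answer}
        return seen - EXPECTED_NS

    drift = unexpected_ns("example.com")
    if drift:
        print("ALERT: unexpected NS records:", sorted(drift))

Such a check observes only the public DNS, so it detects a hijack after the fact; it does not substitute for the registrant-registrar security protections the talk asks about.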
16:58: DNSCheck and DNS2db Patrik Wallstrom / .SE
Q&A No questions.
Tuesday Morning: Symposium Keynote Address Andrew Sullivan / Shinkuro