Charter summary comment:
- Lee Howard: measured observations about IPv6 being faster than IPv4; might be a complexity-related topic.

David Meyer - SDN and the Hidden Nature of Complexity
Title: Bio-Techno Convergence and the Hidden Nature of Complexity
- Goal: open up thinking about what the essential architectures of networks are and how they provide both robustness and fragility; correlation between biology and networks.
- Premise: robust yet fragile (RYF) - what makes the robustness also causes the fragility, often catastrophically.
- Biology/technology: striking levels of convergence; the pieces are different, but at very high levels they are the same.
- What is a cell/ecosystem? Complexity is robustness to uncertainty in the environment and in components (graph: robust vs. complex).
- Theorem: C <= 1/R (C = complexity, R = robustness).
- Definitions of robust, fragile, and robust yet fragile. Example: VRRP is robust to the specific failures it was designed for, but fragile to newly introduced issues (protocol, heartbeat, etc.). Biological, human, and technological examples.
- Scalability is robustness; evolvability is robustness on a time scale.
- H(p, nz) < nH(p, z) for 0 < nz < K (equation defining fragility): the further from the mean, the more massive the "damage" becomes.
- RYF affects all complex systems - the "hidden" nature of complexity. It is easier to create robustness than to prevent fragility (impossible?). Conservation laws are also at work.
- Weaver's classification of problems.
- Universal architecture building blocks (UABB): transcription networks and biological vs. engineered circuits (DNA flow chart); signals, transcription factors (little machines), promoters, genes, proteins; network motifs, feed-forward loops.
- UABB and RYF complexity: bowtie/hourglass architecture; protocol-based architecture; massively distributed
with robust control loops, highly layered.
- Degeneracy (exists in biology, but not in technology); "constraints that deconstrain"; processing of L-1 information (raw materials) flow; fragile to attacks on the standardized interface protocol.
- Hourglass architecture (a bowtie on its side): all systems that scale have this "waist". NDN, Named Data Networking (named-data.net), has a similar waist.
- "Optimization decomposition": each layer is abstracted as an optimization problem.
- Biology vs. the Internet: the main difference is autocatalytic feedback.
- A change in way of thinking: the Internet is reaching biological complexity; we need a deep understanding of the necessary interplay between complexity and robustness, modularity, feedback, and fragility - a multi-disciplinary approach, as in the biological sciences.

Discussion:
- (Dimitri P) Perspective on self-organization vs. emergence - good theoretical tools/points to bring together.
- (Dave Meyer) Weaver's classification of problems: first problems of simplicity, then disorganized complexity, then organized complexity - the last are the interesting problems. The "chaos" period disregarded the organization of networks.
- (Dimitri P) Micro vs. macro approach: the scale of the system is important; a microscopic description; describe the limits of the system.
- (Dave) Needs mathematical rigor that may not exist.
- (Dimitri P) Find the simplification that is tractable(?).
- (Michael) We have the equations, but only for a subset (graph theory). Operational complexity: what is causing me problems?
Softer complexity is fairly well understood, but it is only one aspect.
- (Dimitri P) Two-body problem.
- (Dave) The dynamics are what become interesting.
- (Dimitri P) A graph is a stationary view; it limits the information.

Complexity Framework Discussion (Michael Behringer)
- Understand small things and work up; we know where complexity comes from.
- y = cost, x = scale: you cannot optimize all axes. Design decision: cheap, fast, good - pick two.
- Gratuitous complexity does not add robustness. You don't want to reduce scale and increase cost, for instance - that is bad.
- Define the few elements that we understand and move up from there.
- Cost (capex + opex), i.e. cost of ownership, is a good indicator of complexity.
- Dimensions: bandwidth/traffic; configuration complexity (how hard to configure/maintain); susceptibility to attack (add a feature -> add an attack vector: the complexity spiral); security; scalability; extensibility (can I grow it?); ease of troubleshooting (negative: manual IPsec configuration; positive: dynamic IPsec configuration, achieved by removing configuration and adding negotiation); predictability (hard to predict now); clean failure (can you isolate where the failure occurred?).
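The manual-vs-dynamic IPsec point above can be made concrete with a toy sizing sketch. All numbers, peer names, and function names here are invented for illustration; the only claim is that static per-peer configuration grows with the number of peers, while a negotiation profile does not.

```python
def manual_ipsec_config_lines(peers, lines_per_peer=8):
    # Hypothetical sizing: each statically keyed tunnel needs its own
    # peer address, key, and policy lines on every router.
    return len(peers) * lines_per_peer

def dynamic_ipsec_config_lines(peers, profile_lines=10):
    # One negotiation profile covers any number of peers:
    # configuration is replaced by negotiation.
    return profile_lines

peers = [f"rtr{i}" for i in range(1, 51)]   # 50 hypothetical peers
print(manual_ipsec_config_lines(peers))     # grows linearly with peers
print(dynamic_ipsec_config_lines(peers))    # constant
```

The troubleshooting benefit in the notes comes from exactly this shift: fewer configured lines means fewer places for a typo, at the cost of having to understand the negotiation.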
New Metrics (draft name) - new framework draft outline
- Looking for contributions; low activity in the RG. Lots of papers on complexity -> define the overall problem space without having all the answers.
- Components of complexity (what): physical network; state in the network; network churn; algorithms in use.
- Location of complexity (where): topology; logical; layered (fixed in layer 3, etc.).
- Dependencies - analyze dependencies: local (e.g. a CoS map dependent on an interface); network-wide (DNS, NTP); external (BGP policy, etc.). When complexity bites us, it is often due to an unknown dependency. Example: link utilization is 5%, but bursts occur at the MICROSECOND scale, so they never show up - a physical limitation unknown to the protocol designer.
- Management interactions: configuration complexity; troubleshooting complexity; monitoring; system integration complexity.
- External interactions: user; end system; inter-network interaction.
- Map out a five-dimensional graph and map the pieces onto it.
- (Dimitri P) These metrics are not metrics, they are network states. How much information do you need to describe the state of a network? How many pieces do I need to define this? Where is it located? How can I retrieve the data? What are the operational dependencies? The structural mapping is part of the research itself. How do you describe network states? Define that before making the mapping of complexity.
- (Michael) Example: SDN is everywhere, but there is debate about central vs. distributed. Take routing: it runs either on the network devices or on the controller. Compare the two with different topologies: algo1 = local, algo2 = distributed - we can compare them.
- (Dimitri P) Methods can be defined, but can you define the global state of routing?
- (Michael) Mapping protocols and state into the network.
- (Dimitri P) Link the macroscopic view with the component view.
- (Michael) Suggestions?
- (Dimitri P) The methods being described should be documented for others. This has to combine both the engineering view and the scientific view (Michael: engineering vs. Dave: science). What is the actual complexity of making a change?
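The microburst dependency example above can be illustrated with a small simulation; the line rate, burst probability, and window size are arbitrary invented values. The point is only that a long-window average can read "5%" while individual microsecond samples hit line rate.

```python
import random

LINE_RATE = 10_000     # hypothetical units per microsecond
SAMPLES = 100_000      # 0.1 s of 1-microsecond samples

random.seed(1)
# Traffic arrives as rare full-line-rate microbursts; the link is idle otherwise.
usage = [LINE_RATE if random.random() < 0.05 else 0 for _ in range(SAMPLES)]

avg_util = sum(usage) / (LINE_RATE * SAMPLES)   # what the 5-minute graph shows
peak_util = max(usage) / LINE_RATE              # what the buffer actually sees

print(f"average utilisation over the window: {avg_util:.1%}")
print(f"peak 1-microsecond utilisation:      {peak_util:.0%}")
```

An averaged counter hides the bursts entirely, which is exactly the "unknown dependency" the notes describe: drops on a link that monitoring reports as nearly idle.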
- (Michael) Start simple (a simple network + controller + 2 prefixes), measure, and then extrapolate. Does it matter how much state you are shuttling(?) around?
- (Dimitri P) Complexity means scale - top-down, agent-based modeling (IBM = individual-based modeling). What has to have dimensions that you can measure? A critical element might be missing.
- (Michael) Reduce simply, and build from there. HELP US :)
- (Dimitri P) The formulation itself is the difficulty of the problem; would like more details on the methods.
- (Michael) SDN comparison: take an overlay network (IPsec) and compare it with a "cloud" (MPLS). Intuitively, depending on the size of the network, the tunnel approach is simplest but larger; the "cloud" provides the scale. There will be an inflection point somewhere between two nodes and MANY nodes where one becomes easier than the other - it depends on scale.
- (Dimitri P) Link to scientific research - compare biology to information systems: we don't have the mass of the elements; finding the physics-based building blocks is harder in information systems.
- (Michael) Hope that someone grabs the idea and runs with it, documenting approaches.
- (Alex St??) What is the ultimate goal of the framework - a theory vs. a framework? What are we expecting to build, if a framework?
- (Michael) The latter - a framework. We can base it on existing research and map that into the framework draft. No one is playing in the management-interaction area; can we see where there is not much work being done?
- (Dimitri P) Layering considerations - we have to think about what these considerations are; human interaction to assign the considerations. Do we want to consider this a science, or a layering of engineering decisions? Predictability: is layering the right answer to deal with the complexity?
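Michael's overlay-vs-"cloud" inflection point can be sketched with a toy cost model. The per-tunnel and per-site configuration costs below are invented numbers, chosen only to show that quadratic full-mesh growth eventually crosses any linear-plus-fixed alternative:

```python
def overlay_config_items(n, per_tunnel_end=2):
    # Full mesh of point-to-point tunnels: n*(n-1)/2 tunnels,
    # each configured at both endpoints (hypothetical cost model).
    return n * (n - 1) // 2 * per_tunnel_end

def cloud_config_items(n, per_site=3, provider_fixed=40):
    # One attachment circuit per site plus a fixed provider-side
    # setup cost (hypothetical numbers): linear in the number of sites.
    return n * per_site + provider_fixed

# Find the first network size at which the overlay becomes more
# configuration-heavy than the "cloud".
inflection = next(n for n in range(2, 1000)
                  if overlay_config_items(n) > cloud_config_items(n))
print(f"inflection point at n = {inflection} sites")
```

With two sites the tunnel wins easily; past the inflection point the "cloud" wins, matching the intuition in the notes that which design is simpler depends on scale.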
- (Michael) Layering is supposed to reduce complexity; tried to look at layering. Layers are assumed to be separate, but they are NOT; consider the amount of energy needed to maintain state (steady on the outside, churn inside). There is no strict layering and no black box; there are exceptions that break layers - unknown dependencies.
- RCA on a catastrophic failure - an entire network meltdown. True root cause analysis: for the final condition to occur, 6 preconditions had to be true, across design, software, architecture, and operations. It could have been avoided with blocks at various different points. Document catastrophic failures rigorously as examples - we can derive top-down information about where the complexity is coming from. We don't know what complexity is, but we do know when it bites us.
- (Lee Howard) Like the approach of examples; keep applying the examples. Where do we list the hundred axes that we are trying to compare? Tradeoffs have to be made, and I get to choose my tradeoffs. Include negative cases, not just catastrophic failures.
- (Michael) The BGP MED problem: can we reverse engineer the failure to see where it fails?
- (Dimitri P) Certain phenomena happen due to design -> loop avoidance was not in early routing protocols; BGP allowed that there would be loops and built this in. Are the causes part of the design aspects that cause the failures?
- (Michael) MED oscillation is complexity because we did not foresee it.
- Complexity: we never talk about the user interactions scientifically - the most important part (opinion).
- (Dave) Data center scale doesn't involve users.
- (Lee Howard) I can't predict what some random person is going to do - you can't predict someone throwing a baseball through a window. A sufficiently complex system might not actually be predictable; be robust to the perturbations.
- (Michael) Can we get to 80%? (Lee)
- (Dimitri P) Do we have the tools to do this? We might not have the tools to do a design correct enough to know the failures - add this to the framework? What is the perspective that will describe the framework(???)
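The root-cause-analysis pattern above (six preconditions, all of which had to hold) can be expressed as a tiny sketch. The precondition names here are hypothetical placeholders, not the ones from the actual failure; the point is only that a conjunctive failure can be averted by a block at any single point.

```python
# Hypothetical preconditions spanning design, software, architecture,
# and operations (invented names for illustration).
PRECONDITIONS = {
    "design_flaw": True,
    "software_bug": True,
    "arch_single_point": True,
    "ops_change_window": True,
    "monitoring_gap": True,
    "unknown_dependency": True,
}

def meltdown(conditions):
    # The catastrophic failure occurs only when ALL preconditions hold.
    return all(conditions.values())

assert meltdown(PRECONDITIONS)

# Blocking any ONE precondition prevents the failure.
for name in PRECONDITIONS:
    blocked = dict(PRECONDITIONS, **{name: False})
    assert not meltdown(blocked)

print("a block at any single point avoids the catastrophic failure")
```

This is what makes rigorously documented failure examples valuable: each precondition identified is a candidate place to put a block.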
- Scope the work; define it correctly.
- (Michael) There is a risk of overengineering.
- (Dimitri P) Suggestion: what do the other sciences say about complexity? Combine them and go from there. Need updates to Dave's presentation; extrapolate.
- (Michael) The framework document needs work - need volunteers/help, and time to write. Concrete call for input.
- (Dimitri P) A Google Doc or somesuch to show live edits/input.
- (Michael) It is easy to take a section at a time, harder when working on the whole thing.
- (Lee) Keep going and work it through - good start - get started!
- (Back to Lee's question) Unforeseen implications of tradeoffs: we optimized for X, but really Y happened. Is it because we haven't mucked with the protocol as much as with v4 yet? Can we put rigor around the measurements?
- (Dimitri P) Moving towards a performance metric - needs to be addressed as the work evolves. How do we put the elements together? Optimization vs. robustness? The work is about quantifying elements.
- (Michael) Maybe because the security is not in place.
- (Alex) Why is this relevant? There are multiple applications for this - specifically define what those applications are.
- (Michael) How can we use the framework?
- (Dimitri P) The inverse of complexity is not necessarily simplicity.
- (Michael) Complexity is not bad.
- (Dave) But it's necessary; an entire industry thrives on this - bells and whistles sell routers.
- (Dimitri P) Scientific domains are about bringing up complexity to enhance.

Final words: BEER