Inference over geometric objects

Robert Woodbury’s presentation, “Purposes of Geometry in the Semantic Web.” I missed the very start.

  • Sub-parts are not defined in the SVG tree; where’s the recursion?
  • SVG provides tools for interacting with semantic information.
  • An SVG planets demo: orbiting planets showing orbit spans (lifted from some Oz guys).
  • Specify patterns in SVG geometry: if SVG is going to be interesting to designers, it needs to represent patterns and infer things from them.
  • Rules to combine lines into other lines, e.g. converting an overlapped red/blue line into a single line; this yields a minimal number of lines.
  • Design Process - Brief, Layout, Massing, Detailing
  • DAML and SVG representations of each stage, describing room layouts.
  • Layers: semantic functional layout, spatial room layout, spatial wall representation, semantic wall representation.
  • DAML creates the functional layout.
  • The SVG representation converts that into a room layout.
  • It’s not the rooms that matter, it’s the walls.
  • Transform SVG rects into SVG lines.
  • RDF representation of walls: has_function, has_room_rightSide, etc.
  • To extend SVG from info-viz into design/production, it needs an inference layer.
  • I asked, “Why an SVG view at all?” Geometric data is information, and the first thing you do with geometric data is visualise it, so let the SVG view have it. I’m still not convinced that SVG needs an inferencing capability, though Rob did suggest that was my geek bias.
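To make the “minimal number of lines” rule concrete, here is a hedged sketch (my own illustration, not Rob’s code): collinear, overlapping segments, such as a red line drawn over a blue one, are treated as 1-D intervals and merged into the smallest possible set.

```python
def merge_segments(segments):
    """Merge overlapping or touching (x1, x2) intervals into a minimal set."""
    merged = []
    # Normalise each segment so start <= end, then sweep left to right.
    for start, end in sorted((min(a, b), max(a, b)) for a, b in segments):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_segments([(0, 40), (30, 70), (90, 120)]))
# -> [(0, 70), (90, 120)]
```

A real rule engine would first have to group segments by line (same slope and intercept) before collapsing them like this; the 1-D merge is the core of the inference.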
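The rect-to-lines step can be sketched too. Assuming each `<rect>` is a room and each wall becomes a `<line>` that the wall-level RDF can then describe, a minimal transformation (element names are standard SVG, the rest is my illustration) might look like:

```python
import xml.etree.ElementTree as ET

def rect_to_lines(rect):
    """Return the four wall <line> elements for an SVG <rect>."""
    x = float(rect.get("x", 0))
    y = float(rect.get("y", 0))
    w = float(rect.get("width"))
    h = float(rect.get("height"))
    corners = [(x, y), (x + w, y), (x + w, y + h), (x, y + h)]
    lines = []
    # Walk the corners in order, closing the loop back to the first one.
    for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]):
        lines.append(ET.Element("line", x1=str(x1), y1=str(y1),
                                x2=str(x2), y2=str(y2)))
    return lines

room = ET.fromstring('<rect x="10" y="10" width="200" height="100"/>')
for wall in rect_to_lines(room):
    print(ET.tostring(wall).decode())
```

Once the walls are individual elements, statements like has_room_rightSide can point at a wall’s id rather than at a whole rect.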

Comments