Under the guest direction of Tony Award-winner Dominique Serrand, Stanford cast and crew explore age-old themes from a 17th-century Spanish play while incorporating modern questions of gender and ambiguity.
https://www.youtube.com/watch?v=FJSG0wr40F4

Music: "Dunes" by Podington Bear
http://bit.ly/2EUqO13


Stanford School of Medicine researcher Christopher Gardner's recent study on individual predisposition to different kinds of diets yields new insight into the great Low-Carb vs. Low-Fat Debate.


In her Mathematics Research Center Public Lecture, “Breaking Codes and Finding Patterns,” Professor Susan Holmes will discuss what we can learn from the master codebreakers who solved the intricacies of the Enigma encryption machine during World War II and how to leverage patterns using mathematics and statistics.
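To make the idea of leveraging statistical patterns concrete, here is a minimal sketch (an illustration, not material from the lecture) that uses English letter-frequency statistics to recover the key of a simple Caesar cipher; the frequency table, the chi-squared scoring rule, and the example ciphertext are all assumptions chosen for illustration.

```python
# Illustrative sketch (not from the lecture): statistical pattern-finding
# applied to a toy cipher via English letter frequencies.
from collections import Counter
import string

# Approximate relative frequencies of letters in English text.
ENGLISH_FREQ = {
    'e': .127, 't': .091, 'a': .082, 'o': .075, 'i': .070, 'n': .067,
    's': .063, 'h': .061, 'r': .060, 'd': .043, 'l': .040, 'c': .028,
    'u': .028, 'm': .024, 'w': .024, 'f': .022, 'g': .020, 'y': .020,
    'p': .019, 'b': .015, 'v': .010, 'k': .008, 'j': .002, 'x': .002,
    'q': .001, 'z': .001,
}

def shift(text, k):
    """Shift each letter back by k positions (decrypt a Caesar cipher)."""
    out = []
    for ch in text.lower():
        if ch in string.ascii_lowercase:
            out.append(chr((ord(ch) - ord('a') - k) % 26 + ord('a')))
        else:
            out.append(ch)
    return ''.join(out)

def chi_squared(text):
    """Compare observed letter counts with expected English counts."""
    letters = [c for c in text if c in string.ascii_lowercase]
    n = len(letters)
    counts = Counter(letters)
    return sum((counts.get(c, 0) - n * p) ** 2 / (n * p)
               for c, p in ENGLISH_FREQ.items())

def crack(ciphertext):
    """Try all 26 shifts and keep the one that looks most like English."""
    return min((shift(ciphertext, k) for k in range(26)), key=chi_squared)

print(crack("wkh sdwwhuq lv wkh zhdnqhvv ri hyhub flskhu"))
# expected: "the pattern is the weakness of every cipher"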



Former Olympian Rachael Flatt, ’15, draws on her figure skating career to build mental-health tools for athletes.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Walter Powell, Professor, Stanford Graduate School of Education and (by courtesy) Sociology, Organizational Behavior, Management Science & Engineering, and Communication at Stanford University, dives into these points...

1. What factors make distinctive network configurations possible at particular points in time and space? How does a collection of diverse organizations emerge and form a field?
2. The critical factors that allow networks of collaboration to emerge are: the presence of multiple types of organizational forms, an anchor tenant that protects the value of openness, and cross-network transposition.
3. Diversely anchored, multi-connected networks are much less likely to unravel than networks reliant on a few elite organizations, and the organizing practices of such networks are more likely to be resilient to perturbations (a minimal simulation sketch follows this list).
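As a minimal illustration of point 3 (a sketch under assumed graph models, not material from the talk), the snippet below compares how much of a network stays connected when a few highly connected organizations are removed: a hub-reliant graph versus a more diversely connected one. It uses the networkx library; the specific graph generators and the "remove the top hubs" rule are assumptions chosen only to make the contrast visible.

```python
# Minimal illustration (not from the talk): networks that depend on a few
# elite hubs fragment more under targeted removal than diversely tied ones.
import networkx as nx

def giant_component_share(g, n_removed=3):
    """Remove the most-connected nodes and report the surviving giant component."""
    g = g.copy()
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:n_removed]
    g.remove_nodes_from(node for node, _ in hubs)
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

# Hub-reliant network: most organizations connect through a handful of elites.
hub_reliant = nx.barabasi_albert_graph(200, 1, seed=0)
# Diversely anchored network: many overlapping ties, no single point of failure.
diverse = nx.watts_strogatz_graph(200, 6, 0.3, seed=0)

print("hub-reliant:", giant_component_share(hub_reliant))
print("diverse    :", giant_component_share(diverse))
# The hub-reliant graph typically keeps a noticeably smaller share of its
# nodes in one connected piece than the diversely connected graph does.
```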


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Jennifer House, mediaX Distinguished Visiting Scholar and Redrock Consulting, and Michael Carter, Director, Produce at TextGenome.org, discuss government funding in education.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Vinay Chaudhri, Former Head of AI, SRI International, looks at these issues...

1. Simply changing the medium of presentation has no impact on improving student learning.
2. An intelligent textbook relies on knowledge representation and reasoning to provide concept summaries, suggested questions, and question answering (a toy sketch follows this list).
3. The resulting learning gains are in the range of one full letter grade.
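As a hypothetical toy sketch of point 2 (not the system described in the talk), the snippet below stores a few concepts as (subject, relation, object) triples and uses them both to suggest study questions and to answer simple ones; the concept names, relation labels, and question templates are invented for illustration.

```python
# Toy sketch (not the system from the talk): a tiny knowledge base of
# (subject, relation, object) triples drives suggested questions and answers.
KB = [
    ("mitochondrion", "is-a", "organelle"),
    ("mitochondrion", "performs", "cellular respiration"),
    ("cellular respiration", "produces", "ATP"),
]

def suggested_questions(kb):
    """Turn each stored relation into a study question."""
    templates = {
        "is-a": "What kind of thing is a {s}?",
        "performs": "What process does the {s} perform?",
        "produces": "What does {s} produce?",
    }
    return [templates[r].format(s=s) for s, r, o in kb if r in templates]

def answer(kb, subject, relation):
    """Answer a question by looking up the matching triple."""
    return [o for s, r, o in kb if s == subject and r == relation]

print(suggested_questions(KB))
print(answer(KB, "cellular respiration", "produces"))  # -> ['ATP']
```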


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Paulo Blikstein, Assistant Professor, Stanford Graduate School of Education and (by courtesy) Computer Science at Stanford University, discusses these points...

1. When we talk about machine learning or teaching machines, we're also altering our metaphor of human learning.
2. The use of teaching machines has an 80-year history, but the results are not encouraging.
3. Educational researchers mostly know why these attempts fail, but communication between educators, cognitive scientists, and technologists is faulty.
4. Some areas of application of AI have shown promise in education, but their business models are still ill-defined.
5. The biggest impact of AI in education might come from applications that do not even exist today, and will likely not come from the replacement of teachers or legacy education infrastructure.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Prasad Ram, Founder, Gooru, discusses his company's role in the AI for Learning and Understanding space.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Lewis Johnson, President/CEO, Alelo Inc., examines...

1. Cloud-based cognitive services can accelerate the deployment and scaling of AIED systems.
2. Moving to the cloud provides access to volumes of learner data, which has profound implications for instructional design.
3. Technical and organizational barriers offer opportunities for innovators to occupy the high ground in the digital learning landscape.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Martha Russell, Executive Director of mediaX at Stanford University, kicks off the symposium and looks at these critical points…

1. Innovation ecosystems are sustainable business networks built on collaboration and meant to produce innovation in a non-linear way through the collective action of legally independent actors, relying increasingly on horizontal, peer-to-peer linkages among different agents.
2. They are differentiated from other types of business networks by patterns of interactions, receptivity to feedback and innovation capacity in responding to changing conditions.
3. Seen through the lens of complexity thinking, innovation ecosystems are open non-linear systems characterized by multi-faceted motivations and undergoing persistent transformations through recombinatorial patterns of interactions, implying a holistic integrity of partners’ mutual activities and governance.
4. The vitality and resilience of innovation ecosystems can be fostered by increasing the number of network nodes, promoting the quantity and quality of feedback linkages, encouraging autonomous relational contracts, removing inner and outer communication gaps, cultivating shared vision of interdependencies and collective resources, and maintaining a balance of exploration and exploitation.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Bruce McCandliss, Professor, Stanford Graduate School of Education and (by courtesy) Psychology at Stanford University, addresses these points...

1. Setting the stage of the ecosystem for educational innovation: who are the stakeholders? What limits their engagement and innovation?
2. AI research plays out at multiple, interacting levels of complexity: neural systems (biological and artificial), the whole developing child, the school classroom, and the school district level.
3. Systems neuroscience may provide a key level of explanation for a learner’s trajectory through a learning environment.
4. Educational systems have increasing degrees of freedom for differentiated instruction and engagement.
5. AI, when combined with insights from the Learning Sciences, may provide decision relevant information to enhance children’s development during the early school years, enabling schools to become learning institutions that learn.


From the November 13th, 2017 Symposium “Innovation Ecosystems for AI-Based Education, Training and Learning,” Michael Kirst, Emeritus Professor, Stanford Graduate School of Education & President, California Board of Education, looks at these critical points...

1. California lacks the public postsecondary capacity to satisfy current and future workforce needs for four-year college degrees and to accommodate the increasing number of K-12 students who meet entrance qualifications.
2. Data about the ecology of postsecondary entities providing lifelong learning is badly lacking. We found 350 providers in the San Francisco Bay Area, but only about a third were in federal databases.
3. The California Master Plan for Higher Education, approved in 1960, is not designed to meet the current or future workforce needs of the state, and has no strategy to meet the needs of students 25-55 years old or to integrate a complex private postsecondary education sector.


From the November 1st, 2017 Human AI Collaboration: A Dynamic Frontier Conference, Poppy Crum, Chief Scientist, Dolby Laboratories and Stanford Adjunct Professor, CCRMA, examines...

1. Our perceived experience of sensory information in the world is malleable, contextually and experientially dependent, probabilistic in approximate description, and predictable with some uncertainty.
2. Our priors differentiate our perceptual experiences even when the input to our sensory systems is constant. In this way, bias from our priors is one component of how we, collectively, most often end up with a heterogeneous experience of homogeneous sensory input (a toy Bayesian sketch follows this list).
3. For technology to optimally engage and enhance each of our individual human systems, it must be personalized to our sensory/biological baselines, must learn and integrate semantic and contextual information about our surroundings and internal state, and must present effectively dimensionality-reduced data to the user.
4. Closing the technology/physiology loop and allowing always-on sensors to objectify our internal states opens up great possibilities, but it means the age of the poker face may come to an end. How will this change our social behaviors? How can we use this to enhance the qualities that make us most human?
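To make point 2 concrete, here is a minimal Bayesian sketch (an illustration, not material from the talk): two observers receive identical sensory input but hold different priors, and standard Gaussian cue combination yields different percepts. All numbers are arbitrary.

```python
# Illustrative sketch (not from the talk): identical sensory input, different
# priors, different percepts -- standard Gaussian (conjugate) Bayesian fusion.

def posterior(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation."""
    w = prior_var / (prior_var + obs_var)          # how much to trust the data
    mean = prior_mean + w * (obs - prior_mean)     # posterior mean
    var = prior_var * obs_var / (prior_var + obs_var)
    return mean, var

sensory_input = 10.0   # the same physical signal reaches both observers
noise = 4.0            # shared sensory noise variance

# Observer A expects low values; Observer B expects high values.
percept_a, _ = posterior(prior_mean=2.0,  prior_var=1.0, obs=sensory_input, obs_var=noise)
percept_b, _ = posterior(prior_mean=14.0, prior_var=1.0, obs=sensory_input, obs_var=noise)

print(f"Observer A perceives ~{percept_a:.1f}")   # pulled toward the low prior
print(f"Observer B perceives ~{percept_b:.1f}")   # pulled toward the high prior
```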