TensorFlow notes based on Stanford's cs20si; see the course homepage.

This section covers: Graphs and Sessions

## Tensor

#### An n-dimensional array

- 0-d tensor: scalar (a single number)
- 1-d tensor: vector
- 2-d tensor: matrix
- and so on

## Data Flow Graphs

- Nodes: operators, variables, and constants
- Edges: tensors

```python
import tensorflow as tf
```

TF automatically names the nodes when you don't explicitly name them: here `x = 3`, `y = 5`.

#### Computation graph

Create a session and assign it to the variable `sess` so we can call it later. Within the session, evaluate the graph to fetch the value of `a`.

```python
import tensorflow as tf
```

A more concise way to write this:

```python
import tensorflow as tf
```

**tf.Session()**: A `Session` object encapsulates the environment in which `Operation` objects are executed and `Tensor` objects are evaluated.

#### More graphs

```python
import tensorflow as tf
```

_Note: `tf.mul`, `tf.sub`, and `tf.neg` are deprecated in favor of `tf.multiply`, `tf.subtract`, and `tf.negative`._

```python
# import tensorflow as tf omitted here and below
```

_Note: because we only want the value of `pow_op`, and `pow_op` doesn't depend on `useless`, the session won't compute the value of `useless` → this saves computation._

```python
x = 1
```

_tf.Session.run(fetches, feed_dict=None, options=None, run_metadata=None): pass all the variables whose values you want into a list as `fetches`._

## Distributed Computation

```python
# To put part of a graph on a specific CPU or GPU:
```

**Do not build more than one graph:** the session runs the default graph.

- Multiple graphs require multiple sessions, and each will try to use all available resources by default
- You can't pass data between them without going through Python/NumPy, which doesn't work in a distributed setting
- It's better to have disconnected subgraphs within one graph

## Why Graphs

1. Save computation: only run the subgraphs that lead to the values you want to fetch
2. Break computation into small, differential pieces to facilitate auto-differentiation
3. Facilitate distributed computation: spread the work across multiple CPUs, GPUs, or other devices
4. Many common machine learning models are already commonly taught and visualized as directed graphs