In this lesson we will look at how to create and visualize a graph using TensorBoard. We briefly covered TensorBoard in our first lesson, on variables.
So what is TensorBoard and why would we want to use it?
TensorBoard is a suite of web applications for inspecting and understanding your TensorFlow runs and graphs. TensorBoard currently supports five visualizations: scalars, images, audio, histograms, and graphs.
The computations you will use in TensorFlow, for things such as training a massive deep neural network, can be fairly complex and confusing. TensorBoard makes it much easier to understand, debug, and optimize your TensorFlow programs.
To see TensorBoard in action, you can play with an interactive demo here.
This is what a TensorBoard graph looks like:
The basic script
Below we have the basic script for building a TensorBoard graph. Right now, if you run it in a Python interpreter, all it will return is 63.
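As a plain-Python sanity check of where that 63 comes from (assuming the basic script builds the same chain of adds and multiplies as the scoped version shown later in this lesson — the variable names here mirror that code):

```python
# Mirror of the lesson's operation chain in plain Python
# (tf.add -> +, tf.mul -> *), to see where 63 comes from
a = 1 + 2    # tf.add(1, 2)      -> 3
b = a * 3    # tf.mul(a, 3)      -> 9
c = 4 + 5    # tf.add(4, 5)      -> 9
d = c * 6    # tf.mul(c, 6)      -> 54
g = b + d    # tf.add(b, d)      -> 63, the value the script returns
print(g)
```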
Now we add a SummaryWriter to the end of our code. This will create a folder in your given directory, which will contain the information TensorBoard needs to build the graph.
If you were to run TensorBoard now, with tensorboard --logdir=path/to/logs/directory, you would see that your given directory contains a folder named ‘output’.
If you navigate to the address shown in your terminal, it will take you to your TensorBoard; if you then click Graphs, you will see your graph.
At this point the graph is cluttered and fairly hard to read, so let's name some of its parts to make it more readable.
Adding names
In the code below we have only added one parameter, name=[something], in a few places. This parameter gives the selected operation a label on the graph.
Now if you re-run your Python file and then run tensorboard --logdir=path/to/logs/directory again, you will see that your graph labels the specific parts you named. However, it is still very messy, and if this were a huge neural network it would be next to impossible to read.
Creating scopes
If we give the graph a name by typing with tf.name_scope("MyOperationGroup"): and give it a scope like this: with tf.name_scope("Scope_A"):, then when you re-run TensorBoard you will see something very different. The graph is now much easier to read, and you can see that everything falls under the graph header, in this case MyOperationGroup; inside it are your scopes (A, B, and C), which have their operations within them.
# Here we are defining the name of the graph, scopes A, B and C.
with tf.name_scope("MyOperationGroup"):
    with tf.name_scope("Scope_A"):
        a = tf.add(1, 2, name="Add_these_numbers")
        b = tf.mul(a, 3)
    with tf.name_scope("Scope_B"):
        c = tf.add(4, 5, name="And_These_ones")
        d = tf.mul(c, 6, name="Multiply_these_numbers")
    with tf.name_scope("Scope_C"):
        e = tf.mul(4, 5, name="B_add")
        f = tf.div(c, 6, name="B_mul")
g = tf.add(b, d)
h = tf.mul(g, f)
As you can see, the graph is now a lot easier to read.
In this lesson we looked at:
The basic layout for a TensorBoard graph
Adding the SummaryWriter to build a TensorBoard graph
Adding names to the TensorBoard graph
Adding names and scopes to the TensorBoard graph
Exercises
There’s a great third-party tool called TensorDebugger (TDB). TDB is, as the name says, a debugger. But unlike a standard debugging workflow built around TensorBoard, TDB interfaces directly with the execution of a TensorFlow graph and allows stepping through execution one node at a time, whereas standard TensorBoard cannot be used concurrently with a running TensorFlow graph and log files must be written first.
Install TBD from here and read the material (try the demo!).
Use TDB with this gradient descent code. Plot a graph showing the debugger working through the results, and print the predicted model.
import tensorflow as tf
import numpy as np

# x and y are placeholders for our training data
x = tf.placeholder("float")
y = tf.placeholder("float")
# w is the variable storing our values. It is initialised with starting "guesses"
# w[0] is the "a" in our equation, w[1] is the "b"
w = tf.Variable([1.0, 2.0], name="w")
# Our model of y = a*x + b
y_model = tf.mul(x, w[0]) + w[1]

# Our error is defined as the square of the differences
error = tf.square(y - y_model)
# The Gradient Descent Optimizer does the heavy lifting
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(error)

# Normal TensorFlow - initialize values, create a session and run the model
model = tf.initialize_all_variables()

with tf.Session() as session:
    session.run(model)
    for i in range(1000):
        x_value = np.random.rand()
        y_value = x_value * 2 + 6
        session.run(train_op, feed_dict={x: x_value, y: y_value})

    w_value = session.run(w)
    print("Predicted model: {a:.3f}x + {b:.3f}".format(a=w_value[0], b=w_value[1]))
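While working through the exercise, it can help to know roughly what values the debugger should show at each step. The sketch below mirrors the same training loop in plain Python, with the gradient of the squared error written out by hand (this is our own reference sketch, not part of the exercise code; the learning rate and iteration count match the TensorFlow version above):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Same model as the exercise: y = a*x + b, starting guesses a=1.0, b=2.0
a, b = 1.0, 2.0
learning_rate = 0.01

for _ in range(1000):
    x_value = random.random()
    y_value = x_value * 2 + 6              # training data drawn from y = 2x + 6
    residual = y_value - (a * x_value + b)
    # Gradient descent on error = residual**2:
    # d(error)/da = -2*residual*x, d(error)/db = -2*residual
    a += learning_rate * 2 * residual * x_value
    b += learning_rate * 2 * residual

print("Predicted model: {a:.3f}x + {b:.3f}".format(a=a, b=b))
```

After 1000 steps the parameters should land close to the true values a=2, b=6, which is the same behaviour you should see when stepping through the TensorFlow version node by node.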
These special icons are used for constants and summary nodes.