Articles From Matthew Scarpino
Article / Updated 06-09-2023
After you install TensorFlow, you're ready to start creating and executing applications. This section walks through the process of running an application that prints a simple message.

Exploring the example code

You can download this example code from the "Downloads" link on Wiley.com. The archive's name is tf_dummies.zip, and if you decompress it, you see that it contains folders named after chapters (ch2, ch3, and so on). Each chapter folder contains one or more Python files (*.py). In each case, you can execute the module by changing to the directory and running python or python3 followed by the filename. For example, if you have Python 2 installed, you can execute the code in simple_math.py by changing to the ch3 directory and entering the following command:

python simple_math.py

Feel free to use this example code in professional products, academic work, and morally questionable experiments. But do not use any of this code to program evil robots!

Launching Hello TensorFlow!

Programming books have a long tradition of introducing their topic with a simple example that prints a welcoming message. If you open the ch2 directory in this book's example code, you find a module named hello_tensorflow.py. This listing presents the code.

Hello TensorFlow!

"""A simple TensorFlow application"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

# Create tensor
msg = tf.string_join(["Hello ", "TensorFlow!"])

# Launch session
with tf.Session() as sess:
    print(sess.run(msg))

This code performs three important tasks:

Creates a Tensor named msg that contains two string elements.
Creates a Session named sess and makes it the default session.
Launches the new Session and prints its result.

Running the code is simple. Open a command line and change to the ch2 directory in this book's example code. Then, if you're using Python 2, you can execute the following command:

python hello_tensorflow.py

If you're using Python 3, you can run the module with the following command:

python3 hello_tensorflow.py

As the Python interpreter does its magic, you should see the following message:

b'Hello TensorFlow!'

The welcome message is straightforward, but the application's code probably isn't as clear. A Tensor instance is an n-dimensional array that contains numeric or string data. Tensors play a central role in TensorFlow development. A Session serves as the environment in which TensorFlow operations can be executed. All TensorFlow operations, from addition to optimization, must be executed through a session.
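The b prefix appears because sess.run returns the string tensor's value as a Python byte string. If you'd rather see plain text, a minimal tweak (assuming Python 3; this isn't part of the book's listing, just an illustration) is to decode the bytes before printing:

import tensorflow as tf

msg = tf.string_join(["Hello ", "TensorFlow!"])

with tf.Session() as sess:
    # sess.run(msg) returns a byte string such as b'Hello TensorFlow!'
    print(sess.run(msg).decode('utf-8'))  # prints: Hello TensorFlow!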
Cheat Sheet / Updated 03-02-2022
TensorFlow is Google's premier framework for machine learning, and each new version brings a wide range of capabilities and features. After you've ascended the learning curve, you can write sophisticated machine-learning applications and execute them at high speed. But climbing the learning curve isn't easy; with great power comes great complexity. To help you in your climb, you need to be aware of TensorFlow's data types, the TensorBoard utility, and the deployment of applications to Google's Machine Learning Engine.
Article / Updated 06-19-2018
Machine learning applications store a great deal of data in vectors (one-dimensional tensors) and matrices (two-dimensional tensors). To process this data, TensorFlow provides many functions that operate on vectors and matrices. The following table lists these functions and provides a description of each.

Vector and Matrix Operations

tensordot(a, b, axes, name=None): Returns the sum of products for the elements in the given axes
cross(a, b, name=None): Returns the element-wise cross product
diag(diagonal, name=None): Returns a matrix with the given diagonal values, other values set to zero
trace(x, name=None): Returns the sum of the diagonal elements
transpose(x, perm=None, name='transpose'): Switches rows and columns
eye(num_rows, num_columns=None, batch_shape=None, dtype=tf.float32, name=None): Creates an identity matrix with the given shape and data type
matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None): Returns the product of the two input matrices
norm(tensor, ord='euclidean', axis=None, keep_dims=False, name=None): Returns the norm of the given axis of the input tensor with the specified order
matrix_solve(A, b, adjoint=None, name=None): Returns the tensor x, such that Ax = b, where A is a matrix and b is a vector
qr(input, full_matrices=None, name=None): Returns the QR decomposition (an orthogonal matrix and an upper-triangular matrix) of the given matrix or matrices
svd(tensor, full_matrices=False, compute_uv=True, name=None): Factors the matrix into a unitary matrix, a diagonal matrix, and the conjugate transpose of the unitary matrix
einsum(equation, *inputs): Executes a custom mathematical operation

Of these functions, the two most common are tensordot and matmul. tensordot returns the dot product of one or more axes of two input tensors. That is, tensordot multiplies the corresponding elements of both tensors' dimensions and returns the sum of the products. The axes parameter tells tensordot which dimensions to process. If you set this parameter to a scalar, N, the function will access the last N axes of the first tensor and the first N axes of the second tensor. If you set axes equal to a list or tensor, the first row identifies axes of the first tensor, and the second row identifies axes of the second tensor.

I frequently call tensordot to compute the dot product of two one-dimensional tensors. The following code shows what this looks like:

t1 = tf.constant([4., 3., 2.])
t2 = tf.constant([3., 2., 1.])
dot = tf.tensordot(t1, t2, 1)  # 4*3 + 3*2 + 2*1 = 20

matmul performs traditional matrix multiplication. That is, it multiplies rows of the first tensor by columns of the second tensor and returns a matrix containing the sums. The following code shows how this can be used:

t1 = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
t2 = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
dot = tf.matmul(t1, t2)  # [[ 22. 28.], [ 49. 64.]]

My favorite function in the table is einsum, which makes it possible to create and execute custom mathematical operations. The first parameter is a string that identifies the operation using a special format called the Einstein summation convention. This convention has a number of characteristics, including the following:

The operation is assumed to have one or two inputs. If you provide two inputs, you must separate them with a comma.
Dimensions of input and output matrices are represented by subscripts (usually i, j, and k).
Input subscripts must be separated from output subscripts with the -> symbol.
If an input's subscript is repeated and no output subscripts are given, the operation performs addition. Therefore, einsum('i,i', t1, t2) computes the dot product of tensors t1 and t2.
If an input's subscript is repeated and output subscripts are given, the operation performs multiplication. Therefore, einsum('i,i->i', t1, t2) computes the element-wise product of tensors t1 and t2.

The following code calls einsum to transpose a matrix and multiply two matrices together:

m1 = tf.constant([[1, 2], [3, 4]])
m2 = tf.constant([[5, 6], [7, 8]])
e1 = tf.einsum('ij->ji', m1)         # [[1, 3], [2, 4]]
e2 = tf.einsum('ij,jk->ik', m1, m2)  # [[19, 22], [43, 50]]

For a more complete discussion of the Einstein summation convention, try Samuel Prime's presentation.
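Most of the other functions in the table follow the same pattern of accepting one or two tensors and returning a new tensor. As a minimal sketch of my own (not from the book's example code, and assuming tensorflow is imported as tf), the following lines exercise eye, trace, transpose, and matrix_solve:

# Identity matrix, trace, and transpose
ident = tf.eye(2)                 # [[1. 0.], [0. 1.]]
m = tf.constant([[1., 2.], [3., 4.]])
tr = tf.trace(m)                  # 1 + 4 = 5
mt = tf.transpose(m)              # [[1. 3.], [2. 4.]]

# Solve the linear system m x = rhs for x
rhs = tf.constant([[5.], [11.]])
x = tf.matrix_solve(m, rhs)       # [[1.], [2.]]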
Article / Updated 06-19-2018
Machine learning applications frequently need exponents and logarithms to compute errors and probability. To meet this need, TensorFlow provides many of the same functions available in NumPy. The following table lists 11 of them and provides a description of each.

Exponential and Logarithmic Operations

square(x, name=None): Returns the square of the argument
squared_difference(x, y, name=None): Returns the square of the difference between the two arguments, (x - y)^2
sqrt(x, name=None): Returns the square root of the argument
rsqrt(x, name=None): Returns the reciprocal of the square root
pow(x, y, name=None): Returns elements of the first tensor raised to the power of the elements of the second tensor
exp(x, name=None): Returns the exponential function of the argument
expm1(x, name=None): Returns the exponential function of the argument minus one, exp(x) - 1
log(x, name=None): Returns the natural logarithm of the argument
log1p(x, name=None): Returns the natural logarithm of the argument plus 1, log(x + 1)
erf(x, name=None): Returns the error function of the argument
erfc(x, name=None): Returns the complementary error function of the argument

These functions are straightforward to use and understand. Each executes in an element-wise manner, and the following code demonstrates how you can call square, sqrt, and rsqrt:

t = tf.constant([4.])
t1 = tf.square(t)  # 16
t2 = tf.sqrt(t)    # 2
t3 = tf.rsqrt(t)   # 0.5

The exp function computes the exponential functions of a tensor's elements, and expm1 subtracts 1 from each exponential. If x is a value in the input tensor, the result of expm1 equals exp(x) - 1. Similarly, the log function computes the natural logarithm of a tensor's elements. log1p adds 1 to each value before the logarithm is computed, so if x is a value in the input tensor, the result of log1p equals log(x + 1).
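The exponential and logarithmic functions work the same element-wise way. Here's a minimal sketch of my own (the tensor values are arbitrary, not from the book's code), with approximate results in the comments:

t = tf.constant([0., 1.])
e1 = tf.exp(t)    # [1.        2.7182817]
e2 = tf.expm1(t)  # [0.        1.7182819]   (exp(x) - 1)
l1 = tf.log(tf.constant([1., 2.7182817]))    # [0. 1.]  (approximately)
l2 = tf.log1p(tf.constant([0., 1.7182817]))  # [0. 1.]  (approximately, log(x + 1))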
Article / Updated 06-19-2018
Most of the mathematical routines accept floating-point values as input and return floating-point values as output. But many applications need to convert floating-point values into integer values. For this reason, TensorFlow provides the rounding operations listed in the following table.

Rounding and Comparison Operations

round(x, name=None): Rounds to the nearest integer, rounding halfway values to the nearest even integer
rint(x, name=None): Rounds to the nearest integer, rounding halfway values to the nearest even integer (like round)
ceil(x, name=None): Returns the smallest integer greater than or equal to the value
floor(x, name=None): Returns the greatest integer less than or equal to the value
maximum(x, y, name=None): Returns a tensor containing the element-wise maximum of the two input tensors
minimum(x, y, name=None): Returns a tensor containing the element-wise minimum of the two input tensors
argmax(x, axis=None, name=None, dimension=None): Returns the indices of the greatest elements along an axis of the tensor
argmin(x, axis=None, name=None, dimension=None): Returns the indices of the smallest elements along an axis of the tensor

The table also lists functions that perform comparisons. These functions return maximum and minimum values, both within a tensor and across two tensors.

The round function examines each element of a tensor and returns the closest integer. If a value lies exactly halfway between two integers, round chooses the even integer, a convention known as banker's rounding. rint rounds the same way, as the following code demonstrates along with ceil and floor:

t = tf.constant([-6.5, -3.5, 3.5, 6.5])
r1 = tf.round(t)  # [-6. -4.  4.  6.]
r2 = tf.rint(t)   # [-6. -4.  4.  6.]
r3 = tf.ceil(t)   # [-6. -3.  4.  7.]
r4 = tf.floor(t)  # [-7. -4.  3.  6.]

The next two functions in the table, maximum and minimum, are easy to understand. maximum returns a tensor containing the element-wise maximum of its two inputs, and minimum returns a tensor containing the element-wise minimum. argmax and argmin return the index values of the largest and smallest elements of a tensor. The following code shows how you can use these functions:

t1 = tf.constant([0, -2, 4, 6])
t2 = tf.constant([[1, 3], [7, 2]])
r1 = tf.argmin(t1)  # 1
r2 = tf.argmax(t2)  # [1 0]

If a tensor has multiple maximum/minimum values, argmax and argmin will return the index values of the first occurring element.
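maximum and minimum have no example in the code above, so here's a minimal sketch of my own showing how they compare two tensors element by element:

a = tf.constant([1., 5., 3.])
b = tf.constant([4., 2., 3.])
mx = tf.maximum(a, b)  # [4. 5. 3.]
mn = tf.minimum(a, b)  # [1. 2. 3.]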
Article / Updated 06-19-2018
Machine learning applications are fundamentally mathematical, and TensorFlow provides a wealth of routines for performing mathematical operations on tensors. Each routine is represented by a function of the tf package, and each function returns a tensor. When it comes to TensorFlow operations, it's best to start simple. The following table lists 12 functions that perform basic math operations.

Basic Math Operations

add(x, y, name=None): Adds two tensors
subtract(x, y, name=None): Subtracts two tensors
multiply(x, y, name=None): Multiplies two tensors
divide(x, y, name=None): Divides the elements of two tensors
div(x, y, name=None): Divides the elements of two tensors
add_n(inputs, name=None): Adds multiple tensors
scalar_mul(scalar, x): Scales a tensor by a scalar value
mod(x, y, name=None): Performs the modulo operation
abs(x, name=None): Computes the absolute value
negative(x, name=None): Negates the tensor's elements
sign(x, name=None): Extracts the signs of the tensor's elements
reciprocal(x, name=None): Computes the reciprocals of the tensor's elements

The first four functions perform element-wise arithmetic. The following code demonstrates how they work:

a = tf.constant([3., 3., 3.])
b = tf.constant([2., 2., 2.])
sum = tf.add(a, b)        # [ 5. 5. 5. ]
diff = tf.subtract(a, b)  # [ 1. 1. 1. ]
prod = tf.multiply(a, b)  # [ 6. 6. 6. ]
quot = tf.divide(a, b)    # [ 1.5 1.5 1.5 ]

Applications can perform identical operations by using regular Python operators, such as +, -, *, /, and //. For example, the following two lines of code create the same tensor:

total = tf.add(a, b)  # [ 5. 5. 5. ]
total2 = a + b        # [ 5. 5. 5. ]

When operating on floating-point values, div and divide produce the same result. But for integer division, divide returns a floating-point result, and div returns an integer result. The following code demonstrates the difference between them:

a = tf.constant([3, 3, 3])
b = tf.constant([2, 2, 2])
div1 = tf.divide(a, b)  # [ 1.5 1.5 1.5 ]
div2 = a / b            # [ 1.5 1.5 1.5 ]
div3 = tf.div(a, b)     # [ 1 1 1 ]
div4 = a // b           # [ 1 1 1 ]

As the output shows, the divide function and the / operator both perform Python-style (true) division, which returns floating-point results. In contrast, the div function and the // operator perform integer division when they're given integer operands.
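The remaining functions in the table operate element-wise as well. Here's a minimal sketch of my own (the values are arbitrary, not from the book's code):

t = tf.constant([-4., 2., 8.])
s = tf.add_n([t, t, t])                    # [-12.  6.  24.]
sc = tf.scalar_mul(tf.constant(2.), t)     # [-8.  4.  16.]
m = tf.mod(tf.constant([7., 5.]), 3.)      # [1. 2.]
a1 = tf.abs(t)                             # [4. 2. 8.]
n = tf.negative(t)                         # [4. -2. -8.]
sg = tf.sign(t)                            # [-1.  1.  1.]
r = tf.reciprocal(t)                       # [-0.25  0.5  0.125]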
Article / Updated 06-19-2018
An application must specify the shape of each tensor to be created. The tf package provides functions that update tensors and their shapes after creation. This table lists these transformation functions and provides a description of each.

Functions for Transforming Tensors

cast(tensor, dtype, name=None): Changes the tensor's data type to the given type
reshape(tensor, shape, name=None): Returns a tensor with the same elements as the given tensor with the given shape
squeeze(tensor, axis=None, name=None, squeeze_dims=None): Removes dimensions of size 1
reverse(tensor, axis, name=None): Reverses given dimensions of the tensor
slice(tensor, begin, size, name=None): Extracts a portion of a tensor
stack(tensors, axis=0, name='stack'): Combines a list of tensors into a tensor of greater rank
unstack(tensor, num=None, axis=0, name='unstack'): Splits a tensor into a list of tensors of lesser rank

Despite its name, reshape doesn't modify an existing tensor. Instead, the function returns a tensor with the same elements as the given tensor and the specified shape. For example, the following code uses reshape to convert a four-element vector into a 2-x-2 matrix:

vec = tf.constant([1., 2., 3., 4.])
mat = tf.reshape(vec, [2, 2])  # Result: [[1. 2.], [3. 4.]]

If any dimension of a tensor has a size of 1, calling squeeze will remove it from the tensor, thereby reducing the tensor's rank. If the function's axis parameter identifies one or more dimensions, only those dimensions will be affected by squeeze.

In the reverse function, the axis parameter identifies one or more dimensions to be reversed. The following code demonstrates how reverse works:

mat = tf.constant([[1., 2., 3.], [4., 5., 6.]])
rev_mat = tf.reverse(mat, [0])     # Result: [[4. 5. 6.], [1. 2. 3.]]
rev_mat = tf.reverse(mat, [1])     # Result: [[3. 2. 1.], [6. 5. 4.]]
rev_mat = tf.reverse(mat, [0, 1])  # Result: [[6. 5. 4.], [3. 2. 1.]]

The slice function extracts subtensors from a tensor. The begin parameter identifies the index of the first element to be extracted, and size identifies the shape of the tensor to be extracted, starting from the begin location. For example, suppose that you want to extract the lower-right 2-x-2 matrix from a 3-x-3 matrix. The index of the first extracted element is [1, 1], and the size of the desired tensor is [2, 2]. The following code uses slice to perform this extraction:

mat = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
slice_mat = tf.slice(mat, [1, 1], [2, 2])  # Result: [[5. 6.], [8. 9.]]

stack accepts a list of tensors of rank N and returns a single tensor of rank N+1. In addition to having the same rank, the input tensors must have the same shape. The following code demonstrates how stack can be used. The function combines three one-dimensional tensors into a two-dimensional tensor:

t1 = tf.constant([1., 2.])
t2 = tf.constant([3., 4.])
t3 = tf.constant([5., 6.])
t4 = tf.stack([t1, t2, t3])

When these operations execute, t4 will equal [[1. 2.] [3. 4.] [5. 6.]]. If the axis parameter is set to 1, stacking will be performed along the second dimension, so t4 will be set to [[1. 3. 5.] [2. 4. 6.]].

unstack performs the inverse operation of stack. That is, unstack accepts a tensor of rank N and returns a list of tensors of rank N-1. The num parameter determines how many tensors should be unpacked, and if this isn't set, unstack infers the number from the tensor's shape.
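cast, squeeze, and unstack don't appear in the code above, so here's a minimal sketch of my own showing each one:

t = tf.constant([[1.7, 2.3]])      # shape [1, 2]
ints = tf.cast(t, tf.int32)        # [[1 2]]     (truncates toward zero)
vec = tf.squeeze(t)                # [1.7 2.3]   (shape [2]; the size-1 dimension is gone)
rows = tf.unstack(tf.constant([[1., 2.], [3., 4.]]))
# rows is a list of two rank-1 tensors: [1. 2.] and [3. 4.]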
Article / Updated 06-19-2018
Many TensorFlow applications require tensors that contain random values instead of predetermined values. The tf package provides many functions for creating random-valued tensors, and the following table lists five of them.

Creating Tensors with Random Values

random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None): Creates a tensor with normally distributed values
truncated_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None): Creates a tensor with normally distributed values excluding those lying outside two standard deviations
random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None): Creates a tensor with uniformly distributed values between the minimum and maximum values
random_shuffle(tensor, seed=None, name=None): Shuffles a tensor along its first dimension
set_random_seed(seed): Sets the seed value for all random number generation in the graph

The random_normal and truncated_normal functions create tensors containing normally distributed values. Their arguments determine the characteristics of the distribution. For example, consider a normal distribution with a mean of 0.0 and a standard deviation (σ) of 1.0. Standard deviation tells you how much a normally distributed variable is expected to vary from the mean. Approximately 68.2 percent of the time, a variable lies within one standard deviation of the mean, while 95.4 percent of the time, the variable lies within two standard deviations. In the random_normal and truncated_normal functions, the default mean is 0.0, and the default standard deviation is 1.0.

random_normal generates random values throughout the distribution, so very large and very small values are unlikely but possible. The following code calls random_normal to generate ten random values:

rnd_ints = tf.random_normal([10], dtype=tf.float64)

In contrast, truncated_normal guarantees that the generated values lie within two standard deviations of the mean. Any value outside this range is discarded and reselected. In this manner, truncated_normal ensures that the tensor won't contain any improbably large or small values.

random_uniform creates a tensor containing uniformly distributed values that lie between a minimum and maximum. Because the distribution is uniform, every value is equally likely. random_shuffle doesn't generate new values; it returns a tensor whose values are those of an existing tensor in shuffled order. This shuffling is limited to the tensor's first dimension.

Each function in the table accepts a seed parameter that initializes the random number generator. Setting a seed is important when you need reproducible results: the same seed produces the same sequence of random values each time the graph runs. You can set a seed for the entire graph by calling set_random_seed, which accepts an integer and makes the argument the seed for every operation in the current graph.
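The other functions in the table follow the same pattern. Here's a minimal sketch of my own (the shapes and seed value are arbitrary):

tf.set_random_seed(42)                              # graph-level seed for reproducibility
norm = tf.truncated_normal([3, 3], mean=5.0, stddev=2.0)
unif = tf.random_uniform([4], minval=0, maxval=10)  # values in [0, 10)
shuf = tf.random_shuffle(tf.constant([1, 2, 3, 4])) # same values, shuffled order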
Article / Updated 06-19-2018
Just as most programs start by declaring variables, most TensorFlow applications start by creating tensors. A tensor is an array with zero or more dimensions. A zero-dimensional tensor is called a scalar, a one-dimensional tensor is called a vector, and a two-dimensional tensor is called a matrix. Keep in mind these three points about tensors:

Every tensor is an instance of the Tensor class.
A tensor may contain numbers, strings, or Boolean values. Every element of a tensor must have the same type.
Tensors can be created, transformed, and operated upon using functions of the tf package.

The tf package provides seven functions that form tensors with known values. The following table lists them and provides a description of each.

Creating Tensors with Known Values

constant(value, dtype=None, shape=None, name='Const', verify_shape=False): Returns a tensor containing the given value
zeros(shape, dtype=tf.float32, name=None): Returns a tensor filled with zeros
ones(shape, dtype=tf.float32, name=None): Returns a tensor filled with ones
fill(dims, value, name=None): Returns a tensor filled with the given value
linspace(start, stop, num, name=None): Returns a tensor containing a linear range of values
range(start, limit, delta=1, dtype=None, name='range'): Returns a tensor containing a range of values
range(limit, delta=1, dtype=None, name='range'): Returns a tensor containing a range of values

A tensor may have multiple dimensions, and the number of dimensions in a tensor is its rank. The lengths of a tensor's dimensions form an array called the tensor's shape. Many of the functions in the table accept a shape parameter that identifies the desired shape of the new tensor. The following examples demonstrate how you can set this parameter:

[]: The tensor contains a single value.
[3]: The tensor is a one-dimensional array containing three values.
[3, 4]: The tensor is a 3-x-4 matrix.
[3, 4, 5]: The tensor is a multidimensional array whose dimensions equal 3, 4, and 5.

Most of the functions in the table have a dtype argument that identifies the data type of the tensor's elements. The default value of dtype is float32, which indicates that, by default, tensors contain single-precision floating-point values. The following table lists float32 and other possible data types.

Tensor Data Types

bool: Boolean values
uint8/uint16: Unsigned integers
quint8/quint16: Quantized unsigned integers
int8/int16/int32/int64: Signed integers
qint8/qint32: Quantized signed integers
float16/float32/float64: Floating-point values
complex64/complex128: Complex floating-point values
string: Strings

Each function in the Creating Tensors with Known Values table accepts an optional name argument that serves as an identifier for the tensor. Applications can access a tensor by name through the tensor's graph.

The constant function

The most popular function in the table is constant. Its only required argument is the first, which defines the value or values to be stored in the tensor. You can provide these values in a list, and the following code creates a one-dimensional tensor containing three floating-point values:

t1 = tf.constant([1.5, 2.5, 3.5])

Multidimensional arrays use similar notation. The following code creates a 2-x-2 matrix and sets each of its elements to the letter b:

t2 = tf.constant([['b', 'b'], ['b', 'b']])

By default, TensorFlow won't raise an error if the function's first argument doesn't have the shape given by the shape argument.
But if you set the last argument, verify_shape, to True, TensorFlow will verify that the two shapes are equal. The following code provides an example of mismatched shapes:

t3 = tf.constant([4, 2], tf.int16, [3], 'Const', True)

In this case, the given shape, [3], doesn't match the shape of the first argument, which is [2]. As a result, TensorFlow displays the following error:

TypeError: Expected Tensor's shape: (3,), got (2,).

zeros, ones, and fill

The functions zeros, ones, and fill create tensors whose elements all have the same value. For zeros and ones, the only required argument is shape, which identifies the shape of the desired tensor. As an example, the following code creates a simple three-element vector whose elements equal 0.0:

zero_tensor = tf.zeros([3])

Similarly, the following function call creates a 4-x-4 matrix whose elements equal 1.0:

one_tensor = tf.ones([4, 4])

The fill function requires a value parameter, which sets the value of the tensor's elements. The following code creates a three-dimensional tensor whose values are set to 81.0:

fill_tensor = tf.fill([1, 2, 3], 81.0)

Unlike zeros and ones, fill doesn't have a dtype argument. Instead, the data type of the new tensor matches the data type of the value you provide, so this example produces 32-bit floating-point values.

Creating sequences

The linspace and range functions create tensors whose elements change regularly between a start and end value. The difference between them is that linspace creates a tensor with a specific number of values. For example, the following code creates a tensor whose five elements range from 5.0 to 9.0:

lin_tensor = tf.linspace(5., 9., 5)  # Result: [5. 6. 7. 8. 9.]

Unlike linspace, range doesn't accept the number of elements in the tensor. Instead, it computes successive elements by adding a value called a delta. In the following code, delta is set to 0.5:

range_tensor = tf.range(3., 7., delta=0.5)  # Result: [3.0 3.5 4.0 4.5 5.0 5.5 6.0 6.5]

Like Python's range function, TensorFlow's range function can be called without the start parameter. In this case, the starting value is assumed to be 0.0. The following code demonstrates this:

range_tensor = tf.range(1.5, delta=0.3)  # Result: [0.0 0.3 0.6 0.9 1.2]

If the delta parameter is positive, the starting value must be less than the ending value. If delta is negative, the starting value must be greater than the ending value.
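Keep in mind that these functions only define tensors in the graph. To see the actual values, you evaluate the tensor in a Session, as in this minimal sketch:

import tensorflow as tf

lin_tensor = tf.linspace(5., 9., 5)
with tf.Session() as sess:
    print(sess.run(lin_tensor))  # [5. 6. 7. 8. 9.]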
Article / Updated 06-19-2018
If you succeeded in launching jobs locally, deploying your applications to the cloud shouldn't present any difficulty. But be mindful of two issues:

You need to upload training/evaluation data to Cloud Storage.
The ML Engine may not support the versions of the packages you need.

Before you execute either of the applications in the ch13 directory of the TensorFlow For Dummies downloadable code, you'll need to upload the mnist_test.tfrecords and mnist_train.tfrecords files to a Cloud Storage bucket. For example, if your project's ID is $(PROJECT_ID), you can create a bucket named $(PROJECT_ID)_mnist in the central United States with the following command:

gsutil mb -c regional -l us-central1 gs://$(PROJECT_ID)_mnist

After you create the bucket, you can upload the two MNIST files to the bucket with the following command:

gsutil cp mnist_test.tfrecords mnist_train.tfrecords gs://$(PROJECT_ID)_mnist

After the command executes, it's a good idea to check that Cloud Storage created objects for the two files. You can verify this by running the command gsutil ls gs://$(PROJECT_ID)_mnist.

Running a remote training job

After you upload your test/evaluation data, you can launch a training job with the following command:

gcloud ml-engine jobs submit training $(JOB_ID)

$(JOB_ID) provides a unique identifier for the training job. After you launch the job, you can use this ID to check on the job's status. In addition to identifying the job, you need to tell the ML Engine where to find your package and your input data. You also need to tell the engine where it should store output files. You can provide this information by following the command with flags, and this table lists each of them.

Flags for Cloud Training Jobs

--module-name=MODULE_NAME: Identifies the module to execute
--package-path=PACKAGE_PATH: Path to the Python package containing the module to execute
--job-dir=JOB_DIR: Path to store output files
--staging-bucket=STAGING_BUCKET: Bucket to hold the package during operation
--region=REGION: The region of the machine learning job
--runtime-version=RUNTIME_VERSION: The version of the ML Engine for the job
--stream-logs: Block until the job completes and stream the logs
--scale-tier=SCALE_TIER: The job's operating environment
--config=CONFIG: Path to a job configuration file

The --module-name, --package-path, and --job-dir flags serve the same purposes as the similarly named flags for local training jobs. The --staging-bucket flag identifies the bucket to hold the deployed package. The --region flag identifies the Google Cloud region in which the job should run, such as us-central1.

By default, deployed applications run on the latest stable version of the ML Engine. You can configure this by setting the --runtime-version flag. You can get the list of versions at cloud.google.com/ml-engine/docs/runtime-version-list.

It's a good idea to set the --stream-logs flag because it forces the command to block until the job completes. As the job runs, the console prints messages from the remote log. Aborting the command (Ctrl-C) doesn't affect the remote job. Putting the flags together, a complete submission command looks like the sketch that follows.
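This isn't the book's exact command, just an illustration of how the flags fit together. The job ID mnist_train_1 and the trainer.task module/package names are placeholders of my own, and the bucket is the $(PROJECT_ID)_mnist bucket created earlier; substitute your own module, paths, and region:

gcloud ml-engine jobs submit training mnist_train_1 \
    --module-name=trainer.task \
    --package-path=./trainer \
    --job-dir=gs://$(PROJECT_ID)_mnist/output \
    --staging-bucket=gs://$(PROJECT_ID)_mnist \
    --region=us-central1 \
    --scale-tier=basic \
    --stream-logs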
By default, applications uploaded to the ML Engine can run only on a single CPU. You can configure the execution environment by setting the --scale-tier flag to one of the values listed in the following table.

Scale Tier Values

basic: A single worker on a CPU
basic-gpu: A single worker with a GPU
basic-tpu: A single worker instance with a Cloud TPU
standard-1: Many workers and a few parameter servers
premium-1: A large number of workers and many parameter servers
custom: Define a cluster of your own

If you set --scale-tier to basic-gpu, you can execute your code on an Nvidia Tesla K80 GPU, which has 4,992 CUDA cores and 24 GB of GDDR5 memory. If you set --scale-tier to basic-tpu, you can execute your code on one or more of Google's Tensor Processing Units (TPUs). At the time of this writing, Google restricts TPU access to developers in its Cloud TPU program.

If you set --scale-tier to standard-1 or premium-1, you can run your job on a cluster of processors. If you set --scale-tier to custom, you can configure the cluster by assigning the --config flag to the name of a configuration file.

Running a remote prediction job

If you upload a SavedModel to a Cloud Storage bucket, you can launch a prediction job with the following command:

gcloud ml-engine jobs submit prediction $(JOB_ID)

This command accepts flags that specify where the prediction job should read its input and write its output. This table lists each of these flags.

Flags for Cloud Prediction Jobs

--model-dir=MODEL_DIR: Path of the bucket containing the saved model
--model=MODEL: Name of the model to use for prediction
--input-paths=INPUT_PATH,[INPUT_PATH,…]: Paths to the input data to use for prediction
--data-format=DATA_FORMAT: Format of the input data
--output-path=OUTPUT_PATH: Path to store the prediction results
--region=REGION: The region of the machine learning job
--batch-size=BATCH_SIZE: Number of records per batch
--max-worker-count=MAX_WORKER_COUNT: The maximum number of workers to employ for parallel processing
--runtime-version=RUNTIME_VERSION: The version of the ML Engine for the job
--version=VERSION: Version of the model to be used

When you launch a remote prediction job, you must identify the model's name with --model or the bucket containing the model files with --model-dir. You also need to identify the location of the input files with --input-paths. The ML Engine accepts prediction input data in one of three formats. You can identify the format of your data by setting --data-format to one of the following values:

text: Text files with one line per instance
tf-record: TFRecord files
tf-record-gzip: GZIP-compressed TFRecord files

The last required flag is --output-path. This tells the ML Engine which Cloud Storage bucket should contain the prediction results.

Viewing a job's status

After you launch a job, you can view the job's status in two ways. First, you can use gcloud commands, such as the following:

gcloud ml-engine jobs list: Lists the jobs associated with the default project along with their statuses and creation times
gcloud ml-engine jobs describe $(JOB_ID) --summarize: Provides detailed information about a specific job in human-readable format

Second, you can check on a job by visiting the Google Cloud Console. If you click the menu bars in the upper left and scroll down, you see an entry titled ML Engine. This entry leads to two options: Jobs and Models. If you click the ML Engine→Jobs option, the page lists all the jobs associated with the project. If you click a job name, a new page provides detailed information about the job's execution, including its status and any log messages.