For now, migrating from a specific library (e.g., NumPy) to a standard-compatible setup requires manual intervention for each failing API call, but in the future we hope to provide tools that automate the migration process.
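
As a sketch of what one such manual fix can look like, the helper below (a hypothetical shim, not part of any library) rewrites a NumPy-specific `np.transpose(x, axes)` call into the standard `xp.permute_dims` spelling, falling back to `transpose` on namespaces that predate the standard:

```python
import numpy as np

def permute_dims(xp, x, axes):
    """Portable transpose: prefer the Array API name if the namespace
    provides it, otherwise fall back to the legacy NumPy spelling.
    (Illustrative shim only -- the fallback is our own assumption.)"""
    if hasattr(xp, "permute_dims"):  # NumPy >= 2.0, compat namespaces, ...
        return xp.permute_dims(x, axes)
    return xp.transpose(x, axes)     # legacy library-specific spelling

x = np.ones((2, 3, 4))
y = permute_dims(np, x, (2, 0, 1))  # shape (4, 2, 3)
```

The same wrap-and-fall-back pattern applies to any renamed call.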

## Migration patterns for selected libraries

Below, you can find a non-exhaustive list of API calls that are present in NumPy
and PyTorch but are not supported by the Array API Standard. For each of them, we
provide the recommended alternative from the standard, along with some notes on
how to use it.

### NumPy

| NumPy API | Array API | Notes |
| --- | --- | --- |
| `np.transpose(x, axes)` | `xp.permute_dims(x, axes)` | `axes=None` is not supported; pass the axes explicitly |
| `np.concatenate(...)` | `xp.concat(...)` | |
| `np.power(x, y)` | `xp.pow(x, y)` | |
| `np.absolute(x)` | `xp.abs(x)` | |
| `np.invert(x)` | `xp.bitwise_invert(x)` | |
| `np.left_shift(x, n)` | `xp.bitwise_left_shift(x, n)` | |
| `np.right_shift(x, n)` | `xp.bitwise_right_shift(x, n)` | |
| `np.arcsin(x)` | `xp.asin(x)` | |
| `np.arccos(x)` | `xp.acos(x)` | |
| `np.arctan(x)` | `xp.atan(x)` | |
| `np.arctan2(y, x)` | `xp.atan2(y, x)` | |
| `np.arcsinh(x)` | `xp.asinh(x)` | |
| `np.arccosh(x)` | `xp.acosh(x)` | |
| `np.arctanh(x)` | `xp.atanh(x)` | |
| `np.bool_` | `xp.bool` | |
| `np.array(x)` | `xp.asarray(x)` | |
| `np.ascontiguousarray(x)` | `xp.asarray(x, copy=True)` | Use `copy=True` to ensure a fresh, contiguous copy |
| `x.astype(dtype)` | `xp.astype(x, dtype)` | |
| `np.unique(x)` | `xp.unique_values(x)` | |
| `np.unique(x, return_counts=True)` | `xp.unique_counts(x)` | |
| `np.unique(x, return_inverse=True)` | `xp.unique_inverse(x)` | |
| `np.unique(x, return_index=True, return_inverse=True, return_counts=True)` | `xp.unique_all(x)` | |
| `np.linalg.norm(x)` | `xp.linalg.vector_norm(x)` or `xp.linalg.matrix_norm(x)` | Choose based on whether `x` is a vector or a matrix |
| `np.dot(a, b)` | `xp.matmul(a, b)` or `xp.vecdot(a, b)` or `xp.tensordot(a, b, axes=1)` | |
| `np.vstack((a, b))` | `xp.concat((a, b), axis=0)` | For inputs that are at least 2-D |
| `np.row_stack((a, b))` | `xp.concat((a, b), axis=0)` | For inputs that are at least 2-D |
| `np.hstack((a, b))` | `xp.concat((a, b), axis=1)` | Use `axis=0` for 1-D inputs |
| `np.column_stack((a, b))` | `xp.concat(...)` | Combine with `xp.reshape` to make the inputs 2-D first |
| `np.dstack((a, b))` | `xp.concat((a, b), axis=2)` | For inputs that are at least 3-D |
| `np.trace(x)` | `xp.linalg.trace(x)` | |
| `np.diagonal(x)` | `xp.linalg.diagonal(x)` | |
| `np.cross(a, b)` | `xp.linalg.cross(a, b)` | |
| `np.outer(a, b)` | `xp.linalg.outer(a, b)` | |
| `np.matmul(a, b)` | `xp.matmul(a, b)` or `xp.linalg.matmul(a, b)` | |
| `np.ravel(x)` | `xp.reshape(x, (-1,))` | |
| `x.flatten()` | `xp.reshape(x, (-1,))` | |
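
To illustrate the stacking rows above, the snippet below rewrites `np.vstack`/`np.hstack` as `concat` calls. The `getattr` fallback to the legacy `np.concatenate` spelling is our own portability assumption (so the sketch also runs on NumPy versions that lack the standard `concat` name), not part of the standard:

```python
import numpy as np

a = np.ones((2, 3))
b = np.zeros((2, 3))

# Standard name is xp.concat; fall back to the legacy NumPy spelling
# on older NumPy versions (illustrative shim, not part of the standard).
concat = getattr(np, "concat", np.concatenate)

v = concat((a, b), axis=0)  # replaces np.vstack((a, b)) for 2-D inputs
h = concat((a, b), axis=1)  # replaces np.hstack((a, b)) for 2-D inputs

assert np.array_equal(v, np.vstack((a, b)))
assert np.array_equal(h, np.hstack((a, b)))
```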
### PyTorch

| PyTorch API | Array API | Notes |
| --- | --- | --- |
| `torch.transpose(x, dim0, dim1)` | `xp.permute_dims(x, axes)` | Build `axes` from the identity permutation with `dim0` and `dim1` swapped |
| `torch.permute(x, dims)` | `xp.permute_dims(x, axes)` | |
| `torch.cat(...)` | `xp.concat(...)` | |
| `torch.absolute(x)` | `xp.abs(x)` | |
| `torch.clamp(x, min, max)` | `xp.clip(x, min, max)` | |
| `torch.bitwise_not(x)` | `xp.bitwise_invert(x)` | |
| `torch.arcsin(x)` | `xp.asin(x)` | |
| `torch.arccos(x)` | `xp.acos(x)` | |
| `torch.arctan(x)` | `xp.atan(x)` | |
| `torch.arctan2(y, x)` | `xp.atan2(y, x)` | |
| `torch.arcsinh(x)` | `xp.asinh(x)` | |
| `torch.arccosh(x)` | `xp.acosh(x)` | |
| `torch.arctanh(x)` | `xp.atanh(x)` | |
| `torch.tensor(x)` | `xp.asarray(x)` | |
| `x.to(dtype)` | `xp.astype(x, dtype)` | |
| `torch.unique(x)` | `xp.unique_values(x)` | |
| `torch.unique(x, return_counts=True)` | `xp.unique_counts(x)` | |
| `torch.unique(x, return_inverse=True)` | `xp.unique_inverse(x)` | |
| `torch.unique(x, return_inverse=True, return_counts=True)` | `xp.unique_all(x)` | `xp.unique_all` also returns first-occurrence indices, which `torch.unique` does not expose |
| `torch.linalg.norm(x)` | `xp.linalg.vector_norm(x)` or `xp.linalg.matrix_norm(x)` | Choose based on whether `x` is a vector or a matrix |
| `torch.dot(a, b)` | `xp.matmul(a, b)` or `xp.vecdot(a, b)` or `xp.tensordot(a, b, axes=1)` | |
| `torch.vstack((a, b))` | `xp.concat((a, b), axis=0)` | For inputs that are at least 2-D |
| `torch.hstack((a, b))` | `xp.concat((a, b), axis=1)` | Use `axis=0` for 1-D inputs |
| `torch.dstack((a, b))` | `xp.concat((a, b), axis=2)` | For inputs that are at least 3-D |
| `torch.trace(x)` | `xp.linalg.trace(x)` | |
| `torch.diagonal(x)` | `xp.linalg.diagonal(x)` | |
| `torch.cross(a, b)` | `xp.linalg.cross(a, b)` | |
| `torch.outer(a, b)` | `xp.linalg.outer(a, b)` | |
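
The automated tooling mentioned earlier could start from a simple rename map, as sketched below. The map entries are drawn from the table above; `migrate_call` and its behavior are purely illustrative assumptions, not an existing tool:

```python
# Hypothetical rename map a migration tool could apply to PyTorch code.
# Entries mirror the table above; this is an illustrative sketch only.
TORCH_TO_ARRAY_API = {
    "torch.cat": "xp.concat",
    "torch.clamp": "xp.clip",
    "torch.bitwise_not": "xp.bitwise_invert",
    "torch.arcsin": "xp.asin",
    "torch.absolute": "xp.abs",
}

def migrate_call(call: str) -> str:
    """Rewrite the function name of a single call expression,
    leaving the argument list untouched."""
    name, paren, args = call.partition("(")
    new_name = TORCH_TO_ARRAY_API.get(name, name)
    return new_name + paren + args

print(migrate_call("torch.clamp(x, 0, 1)"))  # -> xp.clip(x, 0, 1)
```

A real tool would need to parse the source properly (e.g., with Python's `ast` module) rather than rewrite strings, but the lookup table is the same.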