Reference Guide  2.5.0
psyclone.transformations.MarkRoutineForGPUMixin Class Reference

Public Member Functions

def validate_it_can_run_on_gpu (self, node, options)
 

Detailed Description

 This Mixin provides the "validate_it_can_run_on_gpu" method which,
given a routine or kernel node, checks that the callee code is valid
to run on a GPU. It is implemented as a Mixin because transformations
from multiple programming models, e.g. OpenMP and OpenACC, can reuse
the same logic.

Definition at line 357 of file transformations.py.
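
Because the validation logic lives in a Mixin, any Transformation sub-class can inherit it alongside its programming-model-specific apply() method. A minimal sketch of the pattern (MyGPURoutineTrans is a hypothetical name; PSyClone's own OpenACC and OpenMP routine transformations combine this Mixin with Transformation in the same way):

from psyclone.psyGen import Transformation
from psyclone.transformations import MarkRoutineForGPUMixin

class MyGPURoutineTrans(Transformation, MarkRoutineForGPUMixin):
    '''Hypothetical transformation that marks a routine as GPU-callable.'''

    def validate(self, node, options=None):
        # All of the GPU-validity checking is inherited from the Mixin.
        self.validate_it_can_run_on_gpu(node, options)

    def apply(self, node, options=None):
        self.validate(node, options=options)
        # Insert the programming-model-specific directive here, e.g. an
        # 'acc routine' or 'omp declare target' directive node.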

Member Function Documentation

◆ validate_it_can_run_on_gpu()

def psyclone.transformations.MarkRoutineForGPUMixin.validate_it_can_run_on_gpu(self, node, options)
Check that the supplied node can be marked as available to be
called on GPU.

:param node: the kernel or routine to validate.
:type node: :py:class:`psyclone.psyGen.Kern` |
            :py:class:`psyclone.psyir.nodes.Routine`
:param options: a dictionary with options for transformations.
:type options: Optional[Dict[str, Any]]
:param bool options["force"]: whether to allow routines with
    CodeBlocks to run on the GPU.

:raises TransformationError: if the node is not a kernel or a routine.
:raises TransformationError: if the target is a built-in kernel.
:raises TransformationError: if it is a kernel but without an
                             associated PSyIR.
:raises TransformationError: if any of the symbols in the kernel are
                             accessed via a module use statement.
:raises TransformationError: if the kernel contains any calls to other
                             routines.

Definition at line 365 of file transformations.py.
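
As a usage illustration (a sketch only: ACCRoutineTrans is one of the PSyClone transformations built on this Mixin, while 'kernel' is a placeholder for a Kern or Routine node obtained from a schedule), the "force" option is passed through the options dictionary:

from psyclone.psyir.transformations import TransformationError
from psyclone.transformations import ACCRoutineTrans

trans = ACCRoutineTrans()
try:
    # Permit CodeBlocks in the kernel body; all other checks still apply.
    trans.apply(kernel, options={"force": True})
except TransformationError as err:
    print(f"Cannot mark kernel for GPU: {err}")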

def validate_it_can_run_on_gpu(self, node, options):
    '''
    Check that the supplied node can be marked as available to be
    called on GPU.

    :param node: the kernel or routine to validate.
    :type node: :py:class:`psyclone.psyGen.Kern` |
                :py:class:`psyclone.psyir.nodes.Routine`
    :param options: a dictionary with options for transformations.
    :type options: Optional[Dict[str, Any]]
    :param bool options["force"]: whether to allow routines with
        CodeBlocks to run on the GPU.

    :raises TransformationError: if the node is not a kernel or a routine.
    :raises TransformationError: if the target is a built-in kernel.
    :raises TransformationError: if it is a kernel but without an
        associated PSyIR.
    :raises TransformationError: if any of the symbols in the kernel are
        accessed via a module use statement.
    :raises TransformationError: if the kernel contains any calls to other
        routines.
    '''
    force = options.get("force", False) if options else False

    if not isinstance(node, (Kern, Routine)):
        raise TransformationError(
            f"The {type(self).__name__} must be applied to a sub-class of "
            f"Kern or Routine but got '{type(node).__name__}'.")

    # If it is a kernel call it must have an accessible implementation
    if isinstance(node, BuiltIn):
        raise TransformationError(
            f"Applying {type(self).__name__} to a built-in kernel is not "
            f"yet supported and kernel '{node.name}' is of type "
            f"'{type(node).__name__}'")

    if isinstance(node, Kern):
        # Get the PSyIR routine from the associated kernel. If there is an
        # exception (this could mean that there is no associated tree
        # or that the frontend failed to convert it into PSyIR) reraise it
        # as a TransformationError.
        try:
            kernel_schedule = node.get_kernel_schedule()
        except Exception as error:
            raise TransformationError(
                f"Failed to create PSyIR for kernel '{node.name}'. "
                f"Cannot transform such a kernel.") from error
        k_or_r = "Kernel"
    else:
        # Supplied node is a PSyIR Routine which *is* a Schedule.
        kernel_schedule = node
        k_or_r = "routine"

    # Check that the routine does not access any data that is imported via
    # a 'use' statement.
    # TODO #2271 - this implementation will not catch symbols from literal
    # precisions or initialisation expressions.
    refs = kernel_schedule.walk(Reference)
    for ref in refs:
        if ref.symbol.is_import:
            # resolve_type does nothing if the Symbol type is known.
            try:
                ref.symbol.resolve_type()
            except SymbolError:
                # TODO #11 - log that we failed to resolve this Symbol.
                pass
            if (isinstance(ref.symbol, DataSymbol) and
                    ref.symbol.is_constant):
                # An import of a compile-time constant is fine.
                continue
            raise TransformationError(
                f"{k_or_r} '{node.name}' accesses the symbol "
                f"'{ref.symbol}' which is imported. If this symbol "
                f"represents data then it must first be converted to a "
                f"{k_or_r} argument using the KernelImportsToArguments "
                f"transformation.")

    # We forbid CodeBlocks because we can't be certain that what they
    # contain can be executed on a GPU. However, we do permit the user
    # to override this check.
    cblocks = kernel_schedule.walk(CodeBlock)
    if not force:
        if cblocks:
            cblock_txt = ("\n " + "\n ".join(str(node) for node in
                                             cblocks[0].get_ast_nodes)
                          + "\n")
            option_txt = "options={'force': True}"
            raise TransformationError(
                f"Cannot safely apply {type(self).__name__} to {k_or_r} "
                f"'{node.name}' because its PSyIR contains one or more "
                f"CodeBlocks:{cblock_txt}You may use '{option_txt}' to "
                f"override this check.")
    else:
        # Check any accesses within CodeBlocks.
        # TODO #2271 - this will be handled as part of the checking to be
        # implemented using the dependence analysis.
        for cblock in cblocks:
            names = cblock.get_symbol_names()
            for name in names:
                sym = kernel_schedule.symbol_table.lookup(name)
                if sym.is_import:
                    raise TransformationError(
                        f"{k_or_r} '{node.name}' accesses the symbol "
                        f"'{sym.name}' within a CodeBlock and this symbol "
                        f"is imported. {type(self).__name__} cannot be "
                        f"applied to such a {k_or_r}.")

    calls = kernel_schedule.walk(Call)
    for call in calls:
        if not call.is_available_on_device():
            call_str = call.debug_string().rstrip("\n")
            raise TransformationError(
                f"{k_or_r} '{node.name}' calls another routine "
                f"'{call_str}' which is not available on the "
                f"accelerator device and therefore cannot have "
                f"{type(self).__name__} applied to it (TODO #342).")
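
The behaviour of the CodeBlock check can be observed by building PSyIR directly with PSyClone's generic Fortran frontend. A minimal sketch (the Fortran source string is illustrative; its write statement is not representable in PSyIR and therefore becomes a CodeBlock):

from psyclone.psyir.frontend.fortran import FortranReader
from psyclone.psyir.nodes import CodeBlock, Routine

source = '''
subroutine work(a, n)
  integer, intent(in) :: n
  real, intent(inout) :: a(n)
  write(*,*) "working"
  a(:) = 2.0 * a(:)
end subroutine work
'''
psyir = FortranReader().psyir_from_source(source)
routine = psyir.walk(Routine)[0]
# One CodeBlock (the write statement): validate_it_can_run_on_gpu would
# reject this routine unless options={'force': True} is supplied.
print(len(routine.walk(CodeBlock)))

Note that even with "force", a symbol accessed inside a CodeBlock that is imported from another module is still rejected (imported data must first be converted to an argument with the KernelImportsToArguments transformation), as is any call to a routine that is not available on the accelerator device.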

The documentation for this class was generated from the following file: transformations.py