
Devel #2852

Merged: 124 commits, Dec 15, 2023

Changes from 1 commit
4c22a0a
requirements: update beartype requirement from <0.15.0 to <0.16.0 (#2…
dependabot[bot] Jul 27, 2023
2890038
Fix/input port combine (#2755)
jdcpni Jul 29, 2023
26d1386
Feat/em composition refactor learning mech (#2754)
jdcpni Jul 30, 2023
70f0c57
requirements: update dill requirement from <0.3.7 to <0.3.8 (#2743)
dependabot[bot] Jul 31, 2023
f9ddb0c
tests/ParameterEstimationComposition: Provide expected result instead…
jvesely Jun 28, 2023
3218ef9
tests/ParameterEstimationComposition: Reduce the number of estimates …
jvesely Jul 28, 2023
c49c015
tests/ParameterEstimationComposition: Reduce running time (#2753)
jvesely Jul 31, 2023
e3887fc
Merge remote-tracking branch 'origin/devel'
kmantel Aug 1, 2023
8355ebb
Merge pull request #2758 from kmantel/master-devel
kmantel Aug 2, 2023
0782d49
requirements: update numpy requirement to allow 1.24.4 (#2759)
jvesely Aug 2, 2023
b1039b4
llvm, OptimizationControlMechanism: Change "input port" -> "output po…
jvesely Jul 3, 2023
0a817aa
llvm, OptimizationControlMechanism: Use more descriptive names for in…
jvesely Jul 3, 2023
1ff3db8
llvm, OptimizationControlMechanism: Execute OCM output ports when ass…
jvesely Jul 3, 2023
66aabcf
tests/control: Wrap composition run to return results and grid search…
jvesely Aug 1, 2023
8a2e2e0
llvm, OptimizationControlMechanism: Reuse calculated costs from Trans…
jvesely Aug 1, 2023
8eca274
tests/control: Consolidate model_based_ocm_{after,before} tests
jvesely Aug 1, 2023
84b9a72
llvm, OptimizationControlMechanism: Reuse costs computed by TransferW…
jvesely Aug 3, 2023
2ba0668
llvm/codegen: Use a new variable name for list of rval operands
jvesely Jun 6, 2023
79ba53b
llvm/execution: Use typing to annotate parameter types
jvesely Jun 6, 2023
9a9ef68
llvm: Type annotation fixes (#2762)
jvesely Aug 3, 2023
3d1ec81
Fix/composition existing projections (#2763)
jdcpni Aug 4, 2023
35c6e0a
llvm, GridSearch: Use get_random_state_ptr helper
jvesely Aug 6, 2023
0398894
llvm/Component: Do not include read-only parameters with custom gette…
jvesely Aug 7, 2023
630f03d
llvm/Component: Drop more parameters from the compiled structures
jvesely Aug 6, 2023
699dfa4
llvm, TransferWithCosts: Use get_state_space instead of get_state_ptr…
jvesely Aug 8, 2023
95cd326
llvm: Reduce size of compiled parameters (#2765)
jvesely Aug 8, 2023
8732aaa
requirements: update optuna requirement from <3.3.0 to <3.4.0 (#2764)
dependabot[bot] Aug 8, 2023
5c7a90f
Feat/backprop fct with multi args (#2766)
jdcpni Aug 9, 2023
693c336
[skip ci] (#2770)
jdcpni Aug 11, 2023
2c59d26
tests/Distance: Consolidate
jvesely Aug 8, 2023
28d3835
Functions/Distance: Do not exit early for for COSINE and CORRELATION …
jvesely Aug 8, 2023
ab973f2
llvm, Distance: Turn "NORMALIZE" into compiled parameter
jvesely Aug 8, 2023
071702f
llvm, Distance: Turn "NORMALIZE" into compiled parameter (#2772)
jvesely Aug 16, 2023
233a0ce
tests/TransferWithCosts: Add missing 'function' mark (#2773)
jvesely Aug 16, 2023
d2a72b9
tests: Add 'pytorch' mark to all 'PyTorch' execution mode variants (#…
jvesely Aug 17, 2023
7260d59
requirements: update grpcio requirement from <1.57.0 to <1.58.0 (#2768)
dependabot[bot] Aug 17, 2023
082040c
Feat/em composition new (#2771)
jdcpni Aug 17, 2023
ff5df04
Feat/integrators/integrator mech reset param (#2778)
jdcpni Aug 18, 2023
a615dc6
setup.cfg: Restore parallel execution of tests by default (#2780)
jvesely Aug 20, 2023
122acf9
composition: Remove incorrect warning (#2781)
jvesely Aug 20, 2023
0daed25
Feat/models/ego mdp (#2782)
jdcpni Aug 20, 2023
94af56d
tests/em_composition: Remove unused benchmark fixtures (#2783)
jvesely Aug 21, 2023
9aa16a1
ci, docs: Use Python 3.11 to generate online docs (#2785)
jvesely Aug 22, 2023
d0ba283
Feat/models/ego mdp (#2787)
jdcpni Aug 25, 2023
81df7b3
Feat/models/ego mdp (#2789)
jdcpni Aug 29, 2023
9edacc3
github-actions(deps): bump actions/checkout from 3 to 4 (#2794)
dependabot[bot] Sep 5, 2023
9c75155
requirements: update pytest requirement from <7.4.1 to <7.4.2 (#2793)
dependabot[bot] Sep 5, 2023
f4d0735
requirements: update pytest requirement from <7.4.2 to <7.4.3 (#2796)
dependabot[bot] Sep 8, 2023
6024762
requirements: update grpcio requirement from <1.58.0 to <1.59.0 (#2797)
dependabot[bot] Sep 10, 2023
f4cb9b4
Feat/learning nested (#2801)
jdcpni Sep 18, 2023
9bf3cae
Feat/learning nested (#2802)
jdcpni Sep 18, 2023
03a6193
requirements: update pandas requirement from <2.0.4 to <2.1.1 (#2792)
dependabot[bot] Sep 20, 2023
716046d
deps: Don't use onnxruntime==1.16 on python 3.11 (#2807)
jvesely Sep 26, 2023
893c3eb
requirements: update llvmlite requirement from <0.41 to <0.42 (#2806)
dependabot[bot] Sep 26, 2023
ca2a82e
requirements: update grpcio requirement from <1.59.0 to <1.60.0 (#2809)
dependabot[bot] Oct 4, 2023
c24135b
Feat/learn nested direct (#2812)
jdcpni Oct 11, 2023
3490214
requirements: update fastkde requirement (#2813)
dependabot[bot] Oct 16, 2023
c00173e
ci/codeql: Reduce disk space usage(#2817)
jvesely Oct 17, 2023
204ac0d
llvm: Add support for fp32 to printf helper (#2816)
jvesely Oct 17, 2023
c283c58
requirements: update pillow requirement from <10.1.0 to <10.2.0 (#2815)
dependabot[bot] Oct 17, 2023
1c74859
LogEntries: Do not store references to owner's owner (#2819)
jvesely Oct 19, 2023
aa71055
requirements: update networkx requirement from <3.2 to <3.3 (#2820)
dependabot[bot] Oct 20, 2023
16b8ff1
deps: Bump minimum version of modeci_mdf to 0.4.3
jvesely Oct 18, 2023
4f59709
deps: Bump minimum version of pytorch to 1.10.0
jvesely Oct 19, 2023
8160d4d
deps: Bump minimum version of numpy to 1.21.0
jvesely Oct 19, 2023
76eeef5
ci/ga: Add a CI run with version restricted dependencies
jvesely Oct 18, 2023
0d1865e
ci/github-actions: Add CI run using the lowest supported version of d…
jvesely Oct 20, 2023
9c827ff
requirements: update pytest requirement from <7.4.3 to <7.4.4 (#2822)
dependabot[bot] Oct 25, 2023
6bd555f
tests/MemoryFunctions: Use more accurate and descriptive expected res…
jvesely Oct 22, 2023
2e42c24
DictionaryMemory: Store key after applying noise
jvesely Oct 22, 2023
3f3c531
llvm, DictionaryMemory: Implement noise application to key before sto…
jvesely Oct 22, 2023
d863b30
llvm, DictionaryMemory: Apply 'rate' value before storing 'key'
jvesely Oct 22, 2023
c46c7b8
tests/MemoryFunctions: Add extra insertion to test 'duplicate_keys'
jvesely Oct 27, 2023
bc409f2
llvm, MemoryFunctions: Drop 'duplicate_keys' and 'previous_value' fro…
jvesely Oct 27, 2023
87fd1a9
llvm, MemoryFunctions: Implement, use or drop unused parameters (#2824)
jvesely Nov 3, 2023
688021c
Refactor/learning pathways using ports (#2827)
jdcpni Nov 4, 2023
4eb8b64
Fix/emcomposition learnable projections (#2828)
jdcpni Nov 5, 2023
8150aca
requirements: Update torch requirement from >=1.10.0,<2.1.0 to >=1.10…
jvesely Nov 6, 2023
0818baa
setup.cfg: Restore parallel execution of tests by default (#2831)
jvesely Nov 6, 2023
77557ad
requirements: update pandas requirement from <2.1.1 to <2.1.3 (#2823)
dependabot[bot] Nov 7, 2023
8bc8bf7
Feat/emcomposition/support storage function (#2833)
jdcpni Nov 10, 2023
ca7e9d7
Test/autodiff without torch (#2836)
jdcpni Nov 11, 2023
ddbaef1
requirements: update pandas requirement from <2.1.3 to <2.1.4 (#2835)
dependabot[bot] Nov 12, 2023
f2b8141
github-actions(deps): bump actions/github-script from 6 to 7 (#2841)
dependabot[bot] Nov 14, 2023
7a82b3e
requirements: update pytest-xdist requirement (#2840)
dependabot[bot] Nov 14, 2023
e32223a
tests/functions: Move test_Stability_squeezes_variable to test_stability
jvesely Oct 20, 2023
cca6031
Functions/Stability: Convert 'normalize' to FunctionParameter
jvesely Nov 12, 2023
b61f894
Functions/Stability: Convert 'normalize' to FunctionParameter (#2838)
jvesely Nov 14, 2023
a947643
Feat/log/nparray executions (#2844)
jdcpni Nov 15, 2023
8843a6c
SoftMax: Switch default one_hot_function to 'None' (#2837)
jvesely Nov 15, 2023
b46308c
broken_trans_deps: Add cattrs==23.2.{1,2} to broken deps list (#2849)
jvesely Nov 28, 2023
8293201
requirements: update pytest-xdist requirement (#2848)
dependabot[bot] Nov 28, 2023
01ecccb
ci/github-actions: Run the main CI job every day (#2850)
jvesely Nov 28, 2023
f7f91b6
llvm, TransferFunction: Remove stale comment
jvesely Nov 30, 2023
3ab96b0
llvm, TransferFunction: Assert that 'max' termination measure does no…
jvesely Nov 27, 2023
53078b5
llvm, Component: Drop 'initializer' from compiled parameters
jvesely Nov 27, 2023
6f9c9e8
llvm, Component: Drop OCM functions from compiled parameters
jvesely Nov 28, 2023
b85e8c2
llvm, Component: Drop 'sample' and 'target' from compiled parameters
jvesely Nov 28, 2023
a133e7a
llvm, Component: Drop 'search_space' from compiled parameters
jvesely Nov 28, 2023
3b3703e
llvm, Mechanism: Drop `_parameter_ports' from compiled structures if …
jvesely Nov 28, 2023
fae7d5c
llvm, Component: Drop 'integrator_function' from compiled structures …
jvesely Nov 30, 2023
cf29608
llvm, Component: Drop 'random_state' from DDM's compiled state if it'…
jvesely Dec 2, 2023
e9dc9af
llvm, Component: Use 'add' to insert single element to a set
jvesely Dec 2, 2023
06a9495
llvm, Component: Drop unused cost functions from compiled structures
jvesely Dec 2, 2023
354b065
llvm, Function, Mechanism: Track used parameters and state
jvesely Nov 16, 2023
62c7084
llvm: Minimize and track parameters used in compiled structures (#285)
jvesely Dec 5, 2023
cf2ab2f
Graph: store cycle_vertices as list of cycles
kmantel Dec 1, 2023
8496390
Composition: node roles: determine INTERNAL by not being INPUT/OUTPUT
kmantel Nov 18, 2023
cefe9a1
Composition: node roles: split use of comp graph and scheduler graph
kmantel Nov 10, 2023
29790e8
Merge pull request #2853 from kmantel/noderoles
kmantel Dec 7, 2023
68aa99b
Composition: correct rebuilding scheduler on graph change (#2856)
kmantel Dec 8, 2023
a6631bd
github-actions(deps): bump actions/setup-python from 4 to 5 (#2854)
dependabot[bot] Dec 9, 2023
9123666
requirements: update pandas requirement from <2.1.4 to <2.1.5 (#2857)
dependabot[bot] Dec 9, 2023
b7cf26c
Models/ego/use emcomposition (#2861)
jdcpni Dec 12, 2023
70ae5c4
Merge master into devel (#2860)
kmantel Dec 13, 2023
80e2586
Merge master into devel
kmantel Dec 13, 2023
1b49504
Merge pull request #2862 from kmantel/master-devel
kmantel Dec 14, 2023
1a880a6
condition: correct Condition class mro dependencies
kmantel Sep 2, 2023
5cfbc78
scheduling: handle empty graph-scheduler object docstring
kmantel Dec 15, 2023
647bee7
Scheduler: use ConditionSet to store user-specified conds
kmantel Nov 20, 2023
ecb1d59
utilities: add toposort_key
kmantel Sep 2, 2023
1869c2e
condition: support graph-scheduler graph structure conditions
kmantel Aug 5, 2023
a38768d
Merge pull request #2864 from kmantel/structural-conditions
kmantel Dec 15, 2023
5a21b89
github-actions(deps): bump github/codeql-action from 2 to 3 (#2863)
dependabot[bot] Dec 15, 2023
Feat/models/ego mdp (#2787)
• pytorchcomponents.py
  - collate_afferents: fix accommodation of mech with multiple input_ports and function that takes > 1 argument (e.g., LinearCombination)
  - pytorch_function_creator: refactor linearcombination implementation

• test_learning.py
  - test_backprop_fct_with_2_inputs_to_linear_combination_product and test_backprop_fct_with_3_inputs_to_linear_combination_product: include auto with ExecutionMode.PyTorch

• EGO Model - MDP
  - refactor to not use counter or response layers

• composition.py
  - _parse_receiver_spec(): modify error message for missing receiver to include name of offending Projection
  - test_composition.py
    - TestAddProjection:
      - restore test for parallel projections between two nodes
      - modify error msg for receiver not specified
      - add test for receiver node not in comp

• memoryfunctions.py
  ContentAddressableMemory._validate(): modify error message for scalar fields

---------

Co-authored-by: jdcpni <pniintel55>
jdcpni authored Aug 25, 2023
commit d0ba283e5d060caf597976d39619f084e728d453
186 changes: 74 additions & 112 deletions Scripts/Models (Under Development)/EGO/EGO Model - MDP.py

Large diffs are not rendered by default.

Binary file not shown.
@@ -2279,10 +2279,11 @@ def __init__(self,
prefs: Optional[ValidPrefSet] = None):

default_variable = default_variable if default_variable is not None else [[0], [0], [0]]
error_matrix = np.zeros((len(default_variable[LEARNING_ACTIVATION_OUTPUT]),
len(default_variable[LEARNING_ERROR_OUTPUT])))

# self.return_val = ReturnVal(None, None)
try:
error_matrix = np.zeros((len(default_variable[LEARNING_ACTIVATION_OUTPUT]),
len(default_variable[LEARNING_ERROR_OUTPUT])))
except IndexError:
error_matrix = None

super().__init__(
default_variable=default_variable,
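The hunk above wraps construction of error_matrix in a try/except so that a default_variable with fewer than three items no longer raises at construction. A minimal standalone sketch of the pattern (the index constants here are assumed values for illustration; the real ones are defined elsewhere in PsyNeuLink):

import numpy as np

LEARNING_ACTIVATION_OUTPUT = 1  # assumed index values, for illustration only
LEARNING_ERROR_OUTPUT = 2

def build_error_matrix(default_variable):
    # With fewer than three items, the activation-output and error-output
    # shapes are unavailable, so fall back to None.
    try:
        return np.zeros((len(default_variable[LEARNING_ACTIVATION_OUTPUT]),
                         len(default_variable[LEARNING_ERROR_OUTPUT])))
    except IndexError:
        return None

print(build_error_matrix([[0], [0], [0]]).shape)  # (1, 1)
print(build_error_matrix([[0]]))                  # None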
11 changes: 8 additions & 3 deletions psyneulink/core/components/functions/stateful/memoryfunctions.py
@@ -1258,9 +1258,14 @@ def _validate(self, context=None):
if (isinstance(distance_function, Distance)
and distance_function.metric == COSINE
and any([len(v)==1 for v in test_var])):
warnings.warn(f"{self.__class__.__name__} is using {distance_function} with metric=COSINE and has "
f"at least one memory field that is a scalar (i.e., size=1), which will always produce "
f"a distance of 0 (the angle of scalars is not defined).")
fields_nums_msg = [str(i) for i,v in enumerate(test_var) if len(v)==1]
if len(fields_nums_msg) == 1:
fields_nums_msg = f"and memory field {fields_nums_msg[0]} that is a scalar; this will"
else:
fields_nums_msg = f"with memory fields {' ,'.join(fields_nums_msg)} that are scalars, " \
f"each of which will "
warnings.warn(f"{self.componentName} is using {distance_function.componentName} with metric=COSINE "
f"{fields_nums_msg} always produce a distance of 0 (since angle of scalars is not defined).")

field_wts_homog = np.full(len(test_var),1).tolist()
field_wts_heterog = np.full(len(test_var),range(0,len(test_var))).tolist()
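The warning exists because cosine similarity between one-element vectors collapses to sign(u)*sign(v), so the distance over a scalar field carries no information. A quick demonstration, independent of PsyNeuLink:

import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity
    return 1 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Same-signed scalars always give distance 0 (the angle is undefined):
print(cosine_distance(np.array([3.0]), np.array([0.5])))            # 0.0
# Vectors with two or more elements give an informative distance:
print(cosine_distance(np.array([3.0, 1.0]), np.array([0.5, 2.0])))  # ~0.46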
@@ -1045,9 +1045,9 @@ class Parameters(ModulatoryMechanism_Base.Parameters):
:type: ``list``
:read only: True
"""
# variable = Parameter(np.array([[0],[0],[0]]),
# pnl_internal=True,
# constructor_argument='default_variable')
variable = Parameter(np.array([[0],[0]]),
pnl_internal=True,
constructor_argument='default_variable')
function = Parameter(BackPropagation, stateful=False, loggable=False)
covariates_sources = Parameter(None, stateful=False, structural=True, read_only=True)
error_sources = Parameter(None, stateful=False, structural=True, read_only=True)
@@ -1154,32 +1154,25 @@ def _check_type_and_timing(self):
repr(LEARNING_TIMING)))

def _parse_function_variable(self, variable, context=None):
function_variable = np.zeros_like(variable[np.array([ACTIVATION_INPUT_INDEX,
ACTIVATION_OUTPUT_INDEX,
ERROR_SIGNAL_INDEX])])
function_variable[ACTIVATION_INPUT_INDEX] = variable[ACTIVATION_INPUT_INDEX]
function_variable[ACTIVATION_OUTPUT_INDEX] = variable[ACTIVATION_OUTPUT_INDEX]
function_variable[ERROR_SIGNAL_INDEX] = variable[ERROR_SIGNAL_INDEX]
return function_variable
# Return values of ACTIVATION_INPUT_INDEX, ACTIVATION_OUTPUT_INDEX, and first ERROR_SIGNAL_INDEX InputPorts
# in variable; remaining inputs (additional error signals and/or COVARITES) are passed in kwargs)
return variable[range(min(len(self.input_ports),3))]

def _validate_variable(self, variable, context=None):
"""Validate that variable has exactly three items: activation_input, activation_output and error_signal
"""

variable = super()._validate_variable(variable, context)

if len(variable) < 3:
raise LearningMechanismError("Variable for {} ({}) must have at least three items ({}, {}, and {}{})".
format(self.name, variable,
ACTIVATION_INPUT,
ACTIVATION_OUTPUT,
ERROR_SIGNAL,"(s)"))
# if len(variable) < 3:
num_input_ports = len(self.input_ports)
error_signals_msg = f", and 'ERROR_SIGNAL(s)'" if num_input_ports > 2 else ""
if len(variable) < num_input_ports:
raise LearningMechanismError(f"Variable for {self.name} ({variable}) must have at least {num_input_ports} "
f"items: 'ACTIVATION_INPUT', 'ACTIVATION_INPUT'{error_signals_msg}")

# Validate that activation_input, activation_output are numeric and lists or 1d np.ndarrays and that
# Validate that activation_input, activation_output are numeric and lists or 1d arrays and that
# there is the correct number of items beyond those for the number of error_sources and covariates_sources

assert ASSERT, "ADD TEST FOR LEN OF VARIABLE AGAINST NUMBER OF ERROR_SIGNALS AND COVARIATES"

for i in range(len(variable)):
item_num_string = "Item {i+1}"

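The rewritten _parse_function_variable simply keeps the first min(len(input_ports), 3) items of the Mechanism's variable; additional error signals and covariates are passed separately. A sketch under assumed shapes:

import numpy as np

def parse_function_variable(variable, num_input_ports):
    # Keep ACTIVATION_INPUT, ACTIVATION_OUTPUT, and the first ERROR_SIGNAL
    # item; any remaining inputs are handled via kwargs elsewhere.
    return variable[range(min(num_input_ports, 3))]

variable = np.array([[0.1], [0.9], [0.05], [0.02]])   # e.g., 4 input_ports
print(parse_function_variable(variable, 4))           # first three rows only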
@@ -197,7 +197,7 @@ class Parameters(ProcessingMechanism_Base.Parameters):
:type: 'list or np.ndarray'
"""
function = Parameter(AdaptiveIntegrator(rate=0.5), stateful=False, loggable=False)
reset = Parameter([0], modulable=True, stateful=True, constructor_argument='reset_default')
reset = Parameter([0], modulable=True, constructor_argument='reset_default')

#
@check_user_specified
34 changes: 24 additions & 10 deletions psyneulink/core/compositions/composition.py
@@ -6390,9 +6390,9 @@ def _parse_receiver_spec(self, projection, receiver, sender, learning_projection
if hasattr(projection, "receiver"):
receiver = projection.receiver.owner
else:
raise CompositionError("For a Projection to be added to a Composition, a receiver must be specified, "
"either on the Projection or in the call to Composition.add_projection(). {}"
" is missing a receiver specification. ".format(projection.name))
raise CompositionError(f"'{projection.name}' is missing a receiver specification. For a Projection "
f"to be added to a Composition, a receiver must be specified either on the "
f"Projection or in the call to Composition.add_projection().")

# initialize all receiver-related variables
graph_receiver = receiver_mechanism = receiver_input_port = receiver
@@ -6448,13 +6448,12 @@ def _parse_receiver_spec(self, projection, receiver, sender, learning_projection
if receiver is None:
# raise CompositionError(f"receiver arg ({repr(receiver_arg)}) in call to add_projection method of "
# f"{self.name} is not in it or any of its nested {Composition.__name__}s.")
if isinstance(receiver_arg, Port):
receiver_str = f"{receiver_arg} of {receiver_arg.owner}"
else:
receiver_str = f"{receiver_arg}"
raise CompositionError(f"{receiver_str}, specified as receiver of {Projection.__name__} from "
f"{sender.name}, is not in {self.name} or any {Composition.__name__}s nested "
f"within it.")
receiver_str = f"{receiver_arg} of {receiver_arg.owner}" \
if isinstance(receiver_arg, Port) else f"{receiver_arg.name}"
proj_name = f"'{projection.name}'" if isinstance(projection.name, str) else Projection.__name__
raise CompositionError(
f"'{receiver_str}', specified as receiver of '{proj_name}' from '{sender.name}', "
f"is not in '{self.name}' or any {Composition.__name__}s nested within it.")

return receiver, receiver_mechanism, graph_receiver, receiver_input_port, \
nested_compositions, learning_projection
@@ -8564,6 +8563,20 @@ def _get_acts_in_out_cov(input_source, output_source, learned_projection)->List[
covariates_sources = _get_covariate_info(output_source, learned_projection)
# activation_output is always a single value since activation function is assumed to have only one output
activation_output = [output_source.output_ports[0].value]
# insure that output_source.function.derivative can handle covariates
if covariates_sources:
try:
output_source.function.derivative(input=None, output=activation_output,
covariates=[source.variable for source in covariates_sources])
except TypeError as error:
if "derivative() got an unexpected keyword argument 'covariates'" in error.args[0]:
raise CompositionError(
f"'{output_source.name}' in '{self.name}' has more than one input_port, "
f"but the derivative of its function ({output_source.function.componentName}) "
f"cannot handle covariates required to determine the partial derivatives of "
f"each input in computing the gradients for Backpropagation; use a function "
f"(such as LinearCombination) that handles more than one argument, "
f"or remove the extra input_ports.")
return [activation_input, activation_output, covariates_sources]

# Get existing LearningMechanism if one exists (i.e., if this is a crossing point with another pathway)
@@ -8594,6 +8607,7 @@ def _get_acts_in_out_cov(input_source, output_source, learned_projection)->List[
activation_input, activation_output, covariates_sources = _get_acts_in_out_cov(input_source,
output_source,
learned_projection)

# Use only one error_signal_template for learning_function, since it gets only one source of error at a time
learning_function = BackPropagation(default_variable=activation_input +
activation_output +
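The covariates check added above uses a probe-call pattern: attempt the derivative with a covariates keyword and translate the resulting TypeError into a clearer error. A standalone sketch of that pattern (function and error names here are illustrative, not the PsyNeuLink API):

def check_covariates_support(derivative_fn, output, covariates):
    # Probe the derivative; reject functions whose derivative cannot
    # accept the covariates needed for the partial derivatives.
    try:
        derivative_fn(input=None, output=output, covariates=covariates)
    except TypeError as error:
        if "unexpected keyword argument 'covariates'" in error.args[0]:
            raise ValueError(
                "derivative cannot handle covariates; use a function that "
                "accepts them (e.g., LinearCombination) or remove the "
                "extra input_ports") from error
        raise  # any other TypeError is unrelated; let it propagate

def simple_derivative(input=None, output=None):  # ignores covariates
    return output

check_covariates_support(simple_derivative, [1.0], [[0.5]])  # raises ValueError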
24 changes: 6 additions & 18 deletions psyneulink/library/compositions/emcomposition.py
@@ -111,6 +111,7 @@
# - CHECK FOR EXISTING LM ASSERT IN pytests
#
# - AutodiffComposition:
# - Check that error occurs for adding a controller to an AutodiffComposition
# - Check that if "epochs" is not in input_dict for Autodiff, then:
# - set to num_trials as default,
# - leave it to override num_trials if specified (add this to DOCUMENTATION)
@@ -160,33 +161,20 @@
# - finish adding derivative (for if exponents are specified)
# - remove properties (use getter and setter for Parameters)
#
# - ContentAddressableMemory Function:
# - rename "cue" -> "query"
# - add field_weights as parameter of EM, and make it a shared_parameter ?as well as a function_parameter?

# - DDM:
# - make reset_stateful_function_when a Parameter and arg in constructor
# and align with reset Parameter of IntegratorMechanism)
#
# - FIX: BUGS:
# - Composition:
# - pathways arg: the following should treat simple_mech as an INPUT node but it doesn't
# c = Composition(pathways=[[input,ctl],[simple_mech]])
# - parsing of input dict in constructor:
# improve error message, though the following attempt in XXX causes errors:
# try:
# inputs, num_inputs_sets = self._parse_run_inputs(inputs, context)
# except:
# raise CompositionError(f"PROGRAM ERROR: Unexpected problem parsing inputs in run() for {self.name}.")
#
# -LearningMechanism / Backpropagation LearningFunction:
# - Construction of LearningMechanism on its own fails; e.g.:
# lm = LearningMechanism(learning_rate=.01, learning_function=BackPropagation())
# causes the folllowing error:
# causes the following error:
# TypeError("Logistic.derivative() missing 1 required positional argument: 'self'")
# - ContentAddressableMemory Function:
# - insure that selection function returns only one non-zero value or, if it weights them parametrically,
# then checks against the duplicate_entries setting and warns if that is set.
# (cf LINE 1509)
# - add tests for use of softmax (and worning if no duplicate_entries is set to True)
# - rename "cue" -> "query"
# - add field_weights as parameter of EM, and make it a shared_parameter ?as well as a function_parameter?
# - Adding GatingMechanism after Mechanisms they gate fails to implement gating projections
# (example: reverse order of the following in _construct_pathways
# self.add_nodes(self.softmax_nodes)
33 changes: 23 additions & 10 deletions psyneulink/library/compositions/pytorchcomponents.py
@@ -9,15 +9,14 @@

__all__ = ['PytorchMechanismWrapper', 'PytorchProjectionWrapper']

# def lincomb_product(x):
# return x[0] * x[1]

def pytorch_function_creator(function, device, context=None):
"""
Converts a PsyNeuLink function into an equivalent PyTorch lambda function.
NOTE: This is needed due to PyTorch limitations
(see: https://github.com/PrincetonUniversity/PsyNeuLink/pull/1657#discussion_r437489990)
"""

def get_fct_param_value(param_name):
val = function._get_current_parameter_value(
param_name, context=context)
@@ -34,10 +33,24 @@ def get_fct_param_value(param_name):
return lambda x: x * slope + intercept

elif isinstance(function, LinearCombination):
if get_fct_param_value('operation') == PRODUCT:
return lambda x: torch.tensor(x[0], device=device).double() * torch.tensor(x[1], device=device).double()
def linear_combination_sum(x):
result = torch.tensor([0] * len(x[0]), device=device).double()
for t in x:
result += t
return result
def linear_combination_product(x):
result = torch.tensor([1] * len(x[0]), device=device).double()
for t in x:
result *= t
return result
if function.operation == SUM:
return linear_combination_sum
elif function.operation == PRODUCT:
return linear_combination_product
else:
return lambda x: torch.tensor(x[0], device=device).double() + torch.tensor(x[1], device=device).double()
from psyneulink.library.compositions.autodiffcomposition import AutodiffCompositionError
raise AutodiffCompositionError(f"The 'operation' parameter of {function.componentName} is not supported "
f"by AutodiffComposition; use 'SUM' or 'PRODUCT' if possible.")

elif isinstance(function, Logistic):
gain = get_fct_param_value('gain')
@@ -90,16 +103,16 @@ def add_afferent(self, afferent):


def collate_afferents(self):
"""
Returns weight-multiplied sum of all afferent projections
"""Return weight-multiplied sum of all afferent projections for each input_port of the Mechanism
If there are multiple input_ports, return an array with the sum for each input_port
"""
# FIX: AUGMENT THIS TO SUPPORT InputPort's function
# return sum((proj.execute(proj.sender.value) for proj in self.afferents))
if len(self._mechanism.input_ports) == 1:
return sum((proj.execute(proj.sender.value) for proj in self.afferents))
else:
return [sum(proj.execute(proj.sender.value) for proj in input_port.path_afferents)
for input_port in self._mechanism.input_ports]
# Sum projections to each input_port of the Mechanism and return array with the sums
return [sum(proj.execute(proj.sender.value) for proj in self.afferents if proj._projection in
input_port.path_afferents) for input_port in self._mechanism.input_ports ]

def execute(self, variable):
self.value = self.function(variable)
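The refactored LinearCombination wrapper generalizes from the old hard-coded two-operand product (x[0] * x[1]) to any number of operands, folding them element-wise. A minimal standalone sketch of the same idea, assuming equal-length torch tensors:

import torch

def linear_combination(tensors, operation="SUM"):
    # Element-wise fold over an arbitrary number of operands.
    if operation == "PRODUCT":
        result = torch.ones_like(tensors[0])
        for t in tensors:
            result = result * t
    elif operation == "SUM":
        result = torch.zeros_like(tensors[0])
        for t in tensors:
            result = result + t
    else:
        raise ValueError(f"unsupported operation: {operation}")
    return result

x = [torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0]), torch.tensor([5.0, 6.0])]
print(linear_combination(x, "SUM"))      # tensor([ 9., 12.])
print(linear_combination(x, "PRODUCT"))  # tensor([15., 48.])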
33 changes: 33 additions & 0 deletions psyneulink/library/compositions/pytorchmodelcreator.py
@@ -31,10 +31,43 @@ def __init__(self, composition, device, context=None):

self.params = nn.ParameterList()
self.device = device

self._composition = composition
# # FIX: FOR USE IN SUPPORT OF NESTED AUTOCOMPOSITIONS:
# from psyneulink.core.compositions.composition import Composition
# # First, if the composition has any nested compositions, flatten it.
# if any(isinstance(node, Composition) for node in composition.nodes):
# self._composition = composition.flatten()
# else:
# self._composition = composition


# FIX: FLATTEN PNL COMPOSITION HERE:
# - CREATE A NEW SCHEDULER FOR FLATTENED COMPOSITION:
# flattened_comp_scheduler = graph_scheduler.Scheduler(flattened_composition)
# flattened_comp_scheduler.add_condition_set(outer_composition.scheduler._user_specified_conds)
# flattened_comp_scheduler.add_condition_set(nested_composition.scheduler._user_specified_conds) <- CAN'T
# INCLUDE TIME-BASED
# REMOVE NESTED COMP FROM OUTER COMPOSITION SCHEDULER
# ON THE OUTER COMP SCHED THAT REFERENCES OR IS FOR THE NESTED COMP DEAL WIH THAT
# USES: check for all dependencies of all outer_comp.scheduler.conditions for any on nest_comp,
# recursing for All() and Any()
# OWNS: if nested_comp in outer_comp.scheduler.conditions
# FANCIER: TRANSFER CONDITIONS THAT REFEFERENCE THE NESTED COMPOSITION TO ITS INPUT (?BEFORE) / OUTPUT (
# AFTER) NODES

# Instantiate pytorch mechanisms
for node in set(composition.nodes) - set(composition.get_nodes_by_role(NodeRole.LEARNING)):
# FIX: ADD SUPPORT FOR NESTED AUTODIFFCOMPOSITION(S) HERE:
# - WRITE FLATTEN METHOD, WHICH MUST:
# - PRECLUDE CONTROLLERS AT ANY LEVEL OF NESTING
# - CALL ITSELF RECURSIVELY FOR ALL LEVELS OF NESTING
# - CREATE MAP OF PROJECTIONS IN FLATTENED VERSION TO input/output_CIM PROJECTIONS OF NESTED COMPS
# - CREATE NEW SCHEDULER FOR FLATTENED COMP (FOR USE BY AUTODIFF) (see LINES 76-90 BELOW)
# - MAKE SURE ALL NESTED COMPS ARE AUTODIFF-COMPLIANT
# - CALL Composition.flatten()
# - IN update_parameters, MAP PYTORCH PARAMETERS BACK TO input/output_CIM PROJECTIONS OF NESTED COMPS

pytorch_node = PytorchMechanismWrapper(node,
self._composition._get_node_index(node),
device,