Run overview
swe_bench / lite / dev
Run b865fe80...ae5f
Benchmark pass rate
0%
0 of 2 tasks passed
Passed
0
Tasks that passed
Failed
2
Tasks that failed
Total spend
$0.74
Duration 206 s
Task review
Completed tasks
2 completed tasks. Open a card only when you need logs, patch text, or scoring detail.
marshmallow-code__marshmallow-1343
marshmallow-code/marshmallow
Score
0%
Outcome
Did not pass
Task cost
$0.38
Duration
187 s
Summary
Did not pass
Not resolved by official SWE-bench grading. Fail-to-pass: 0%. Pass-to-pass: 0%.
Run metadata
Benchmark
swe_bench/lite/dev
Model
claude-sonnet-4-5-20250929
Started
Mar 31, 2026, 2:29 AM UTC
Completed
Mar 31, 2026, 2:32 AM UTC
Sandbox
d775c89c-8908-41be-af7e-7b37ac23153c
Tokens
In 25,508 / out 8,192
F2P / P2P
0% / 0%
Passed benchmark
No
Completed
Requirement already satisfied: tomli>=1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==2.20.0) (2.2.1)
Requirement already satisfied: six>=1.5 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from python-dateutil->marshmallow==2.20.0) (1.17.0)
Requirement already satisfied: cachetools>=5.5.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (5.5.2)
Requirement already satisfied: chardet>=5.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (5.2.0)
Requirement already satisfied: colorama>=0.4.6 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (0.4.6)
Requirement already satisfied: filelock>=3.16.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (3.18.0)
Requirement already satisfied: platformdirs>=4.3.6 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (4.3.7)
Requirement already satisfied: pyproject-api>=1.8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (1.9.0)
Requirement already satisfied: typing-extensions>=4.12.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (4.15.0)
Requirement already satisfied: virtualenv>=20.29.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (20.31.1)
Requirement already satisfied: distlib<1,>=0.3.7 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=20.29.1->tox->marshmallow==2.20.0) (0.3.9)
Installing collected packages: marshmallow
Attempting uninstall: marshmallow
Found existing installation: marshmallow 2.20.0
Uninstalling marshmallow-2.20.0:
Successfully uninstalled marshmallow-2.20.0
DEPRECATION: Legacy editable install of marshmallow[dev]==2.20.0 from file:///testbed (setup.py develop) is deprecated. pip 25.3 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11457
Running setup.py develop for marshmallow
Successfully installed marshmallow
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
+ git checkout 2be2d83a1a9a6d3d9b85804f3ab545cecc409bb0 tests/test_marshalling.py
Updated 0 paths from a3815758
+ git apply -v -
Checking patch tests/test_marshalling.py...
Applied patch tests/test_marshalling.py cleanly.
+ : '>>>>> Start Test Output'
+ pytest -rA tests/test_marshalling.py
ERROR: while parsing the following warning configuration:
ignore::marshmallow.warnings.ChangedInMarshmallow3Warning
This error occurred:
Traceback (most recent call last):
File "/opt/miniconda3/envs/testbed/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1918, in parse_warning_filter
category: type[Warning] = _resolve_warning_category(category_)
File "/opt/miniconda3/envs/testbed/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1956, in _resolve_warning_category
m = __import__(module, None, None, [klass])
File "/testbed/src/marshmallow/__init__.py", line 4, in <module>
from marshmallow.schema import (
ImportError: cannot import name 'Schema' from 'marshmallow.schema' (/testbed/src/marshmallow/schema.py)
+ : '>>>>> End Test Output'
+ git checkout 2be2d83a1a9a6d3d9b85804f3ab545cecc409bb0 tests/test_marshalling.py
Updated 1 path from a3815758
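The ImportError above is self-inflicted: the generated patch deleted most of `src/marshmallow/schema.py`, including the `Schema` class, so pytest's warning-filter parsing (which imports `marshmallow.warnings`, and transitively `marshmallow.schema`) fails before any test runs. A quick import smoke check catches this class of regression before launching the full suite; the helper below is a generic sketch, not part of the SWE-bench harness.

```python
import importlib
import sys

def smoke_import(module_name, attr):
    """Return True if `module_name` imports cleanly and exposes `attr`.

    Intended as a cheap pre-test gate, e.g.
    smoke_import("marshmallow", "Schema") on this task's testbed.
    """
    try:
        mod = importlib.import_module(module_name)
        getattr(mod, attr)  # also fails if the patch removed the symbol
        return True
    except Exception as exc:
        print("import check failed:", exc, file=sys.stderr)
        return False
```

Run against this run's patched tree, `smoke_import("marshmallow", "Schema")` would have returned False before pytest was ever invoked.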
Open in Daytona
Benchmark context
Task input
[version 2.20.0] TypeError: 'NoneType' object is not subscriptable
After update from version 2.19.5 to 2.20.0 I got error for code like:
```python
from marshmallow import Schema, fields, validates

class Bar(Schema):
    value = fields.String()

    @validates('value')  # <- issue here
    def validate_value(self, value):
        pass

class Foo(Schema):
    bar = fields.Nested(Bar)

sch = Foo()
sch.validate({
    'bar': 'invalid',
})
```
```
Traceback (most recent call last):
  File "/_/bug_mschema.py", line 19, in <module>
    'bar': 'invalid',
  File "/_/env/lib/python3.7/site-packages/marshmallow/schema.py", line 628, in validate
    _, errors = self._do_load(data, many, partial=partial, postprocess=False)
  File "/_/env/lib/python3.7/site-packages/marshmallow/schema.py", line 670, in _do_load
    index_errors=self.opts.index_errors,
  File "/_/env/lib/python3.7/site-packages/marshmallow/marshalling.py", line 292, in deserialize
    index=(index if index_errors else None)
  File "/_/env/lib/python3.7/site-packages/marshmallow/marshalling.py", line 65, in call_and_store
    value = getter_func(data)
  File "/_/env/lib/python3.7/site-packages/marshmallow/marshalling.py", line 285, in <lambda>
    data
  File "/_/env/lib/python3.7/site-packages/marshmallow/fields.py", line 265, in deserialize
    output = self._deserialize(value, attr, data)
  File "/_/env/lib/python3.7/site-packages/marshmallow/fields.py", line 465, in _deserialize
    data, errors = self.schema.load(value)
  File "/_/env/lib/python3.7/site-packages/marshmallow/schema.py", line 588, in load
    result, errors = self._do_load(data, many, partial=partial, postprocess=True)
  File "/_/env/lib/python3.7/site-packages/marshmallow/schema.py", line 674, in _do_load
    self._invoke_field_validators(unmarshal, data=result, many=many)
  File "/_/env/lib/python3.7/site-packages/marshmallow/schema.py", line 894, in _invoke_field_validators
    value = data[field_obj.attribute or field_name]
TypeError: 'NoneType' object is not subscriptable
```
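The last frame shows the root cause: when the nested value is the wrong type (a string where a mapping is expected), deserialization fails and leaves `data` as `None`, yet `_invoke_field_validators` still indexes into it. A minimal standalone sketch of the failure mode and the kind of guard the fail-to-pass test expects (function names here are illustrative, not marshmallow's actual API):

```python
def invoke_field_validators_buggy(data, field_name):
    # Mirrors schema.py:894 in marshmallow 2.20.0:
    #     value = data[field_obj.attribute or field_name]
    # Raises TypeError when `data` is None.
    return data[field_name]

def invoke_field_validators_fixed(data, field_name):
    # Hedged sketch of a fix: skip field validation when there is no
    # deserialized mapping to read from, instead of subscripting None.
    if data is None:
        return None
    try:
        return data[field_name]
    except KeyError:
        return None

# The buggy path reproduces the reported TypeError:
try:
    invoke_field_validators_buggy(None, "value")
except TypeError as exc:
    print(exc)  # 'NoneType' object is not subscriptable

# The guarded path degrades gracefully:
print(invoke_field_validators_fixed(None, "value"))  # None
```

The actual fix lives inside `Schema._invoke_field_validators`; this sketch only isolates the control-flow change, which is why the run's patch (which deleted that method entirely) could not pass the fail-to-pass test.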
Fix tests
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_wrong_nested_type_with_validates_method
Regression tests
tests/test_marshalling.py::test_missing_is_falsy
tests/test_marshalling.py::TestMarshaller::test_prefix
tests/test_marshalling.py::TestMarshaller::test_marshalling_generator
tests/test_marshalling.py::TestMarshaller::test_default_to_missing
tests/test_marshalling.py::TestMarshaller::test_serialize_fields_with_load_only_param
tests/test_marshalling.py::TestMarshaller::test_missing_data_are_skipped
tests/test_marshalling.py::TestMarshaller::test_serialize_with_load_only_doesnt_validate
tests/test_marshalling.py::TestMarshaller::test_serialize_fields_with_dump_to_param
tests/test_marshalling.py::TestMarshaller::test_serialize_fields_with_dump_to_and_prefix_params
tests/test_marshalling.py::TestMarshaller::test_stores_indices_of_errors_when_many_equals_true
tests/test_marshalling.py::TestMarshaller::test_doesnt_store_errors_when_index_errors_equals_false
tests/test_marshalling.py::TestUnmarshaller::test_extra_data_is_ignored
tests/test_marshalling.py::TestUnmarshaller::test_stores_errors
tests/test_marshalling.py::TestUnmarshaller::test_stores_indices_of_errors_when_many_equals_true
tests/test_marshalling.py::TestUnmarshaller::test_doesnt_store_errors_when_index_errors_equals_false
tests/test_marshalling.py::TestUnmarshaller::test_deserialize
tests/test_marshalling.py::TestUnmarshaller::test_extra_fields
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_many
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_stores_errors
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_fields_with_attribute_param
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_fields_with_load_from_param
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_fields_with_dump_only_param
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_wrong_type_root_data
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_wrong_type_nested_data
Execution
Scorer detail
Not resolved by official SWE-bench grading. Fail-to-pass: 0%. Pass-to-pass: 0%.
Patch text
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
index 085c509b..9396636f 100644
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -369,600 +369,4 @@ class BaseSchema(base.SchemaABC):
)
self.extra = extra
self.context = context or {}
- self._normalize_nested_options()
- self._types_seen = set()
- self._update_fields(many=many)
-
- def __repr__(self):
- return '<{ClassName}(many={self.many}, strict={self.strict})>'.format(
- ClassName=self.__class__.__name__, self=self
- )
-
- def _postprocess(self, data, many, obj):
- if self.extra:
- if many:
- for each in data:
- each.update(self.extra)
- else:
- data.update(self.extra)
- return data
-
- @property
- def dict_class(self):
- return OrderedDict if self.ordered else dict
-
- @property
- def set_class(self):
- return OrderedSet if self.ordered else set
-
- ##### Override-able methods #####
-
- def handle_error(self, error, data):
- """Custom error handler function for the schema.
-
- :param ValidationError error: The `ValidationError` raised during (de)serialization.
- :param data: The original input data.
-
- .. versionadded:: 2.0.0
- """
- pass
-
- def get_attribute(self, attr, obj, default):
- """Defines how to pull values from an object to serialize.
-
- .. versionadded:: 2.0.0
- """
- return utils.get_value(attr, obj, default)
-
- ##### Handler decorators (deprecated) #####
-
- @classmethod
- def error_handler(cls, func):
- """Decorator that registers an error handler function for the schema.
- The function receives the :class:`Schema` instance, a dictionary of errors,
- and the serialized object (if serializing data) or data dictionary (if
- deserializing data) as arguments.
-
- Example: ::
-
- class UserSchema(Schema):
- email = fields.Email()
-
- @UserSchema.error_handler
- def handle_errors(schema, errors, obj):
- raise ValueError('An error occurred while marshalling {}'.format(obj))
-
- user = User(email='invalid')
- UserSchema().dump(user) # => raises ValueError
- UserSchema().load({'email': 'bademail'}) # raises ValueError
-
- .. versionadded:: 0.7.0
- .. deprecated:: 2.0.0
- Set the ``error_handler`` class Meta option instead.
- """
- warnings.warn(
- 'Schema.error_handler is deprecated. Set the error_handler class Meta option '
- 'instead.', category=DeprecationWarning
- )
- cls.__error_handler__ = func
- return func
-
- @classmethod
- def accessor(cls, func):
- """Decorator that registers a function for pulling values from an object
- to serialize. The function receives the :class:`Schema` instance, the
- ``key`` of the value to get, the ``obj`` to serialize, and an optional
- ``default`` value.
-
- .. deprecated:: 2.0.0
- Set the ``error_handler`` class Meta option instead.
- """
- warnings.warn(
- 'Schema.accessor is deprecated. Set the accessor class Meta option '
- 'instead.', category=DeprecationWarning
- )
- cls.__accessor__ = func
- return func
-
- ##### Serialization/Deserialization API #####
-
- def dump(self, obj, many=None, update_fields=True, **kwargs):
- """Serialize an object to native Python data types according to this
- Schema's fields.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :param bool update_fields: Whether to update the schema's field classes. Typically
- set to `True`, but may be `False` when serializing a homogenous collection.
- This parameter is used by `fields.Nested` to avoid multiple updates.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `MarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- # Callable marshalling object
- marshal = marshalling.Marshaller(prefix=self.prefix)
- errors = {}
- many = self.many if many is None else bool(many)
- if many and utils.is_iterable_but_not_string(obj):
- obj = list(obj)
-
- if self._has_processors:
- try:
- processed_obj = self._invoke_dump_processors(
- PRE_DUMP,
- obj,
- many,
- original_data=obj)
- except ValidationError as error:
- errors = error.normalized_messages()
- result = None
- else:
- processed_obj = obj
-
- if not errors:
- if update_fields:
- obj_type = type(processed_obj)
- if obj_type not in self._types_seen:
- self._update_fields(processed_obj, many=many)
- if not isinstance(processed_obj, Mapping):
- self._types_seen.add(obj_type)
-
- try:
- preresult = marshal(
- processed_obj,
- self.fields,
- many=many,
- # TODO: Remove self.__accessor__ in a later release
- accessor=self.get_attribute or self.__accessor__,
- dict_class=self.dict_class,
- index_errors=self.opts.index_errors,
- **kwargs
- )
- except ValidationError as error:
- errors = marshal.errors
- preresult = error.data
-
- result = self._postprocess(preresult, many, obj=obj)
-
- if not errors and self._has_processors:
- try:
- result = self._invoke_dump_processors(
- POST_DUMP,
- result,
- many,
- original_data=obj)
- except ValidationError as error:
- errors = error.normalized_messages()
- if errors:
- # TODO: Remove self.__error_handler__ in a later release
- if self.__error_handler__ and callable(self.__error_handler__):
- self.__error_handler__(errors, obj)
- exc = ValidationError(
- errors,
- field_names=marshal.error_field_names,
- fields=marshal.error_fields,
- data=obj,
- **marshal.error_kwargs
- )
- self.handle_error(exc, obj)
- if self.strict:
- raise exc
-
- return MarshalResult(result, errors)
-
- def dumps(self, obj, many=None, update_fields=True, *args, **kwargs):
- """Same as :meth:`dump`, except return a JSON-encoded string.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :param bool update_fields: Whether to update the schema's field classes. Typically
- set to `True`, but may be `False` when serializing a homogenous collection.
- This parameter is used by `fields.Nested` to avoid multiple updates.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `MarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- deserialized, errors = self.dump(obj, many=many, update_fields=update_fields)
- ret = self.opts.json_module.dumps(deserialized, *args, **kwargs)
- return MarshalResult(ret, errors)
-
- def load(self, data, many=None, partial=None):
- """Deserialize a data structure to an object defined by this Schema's
- fields and :meth:`make_object`.
-
- :param dict data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `UnmarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- result, errors = self._do_load(data, many, partial=partial, postprocess=True)
- return UnmarshalResult(data=result, errors=errors)
-
- def loads(self, json_data, many=None, *args, **kwargs):
- """Same as :meth:`load`, except it takes a JSON string as input.
-
- :param str json_data: A JSON string of the data to deserialize.
- :param bool many: Whether to deserialize `obj` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `UnmarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- # TODO: This avoids breaking backward compatibility if people were
- # passing in positional args after `many` for use by `json.loads`, but
- # ideally we shouldn't have to do this.
- partial = kwargs.pop('partial', None)
-
- data = self.opts.json_module.loads(json_data, *args, **kwargs)
- return self.load(data, many=many, partial=partial)
-
- def validate(self, data, many=None, partial=None):
- """Validate `data` against the schema, returning a dictionary of
- validation errors.
-
- :param dict data: The data to validate.
- :param bool many: Whether to validate `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A dictionary of validation errors.
- :rtype: dict
-
- .. versionadded:: 1.1.0
- """
- _, errors = self._do_load(data, many, partial=partial, postprocess=False)
- return errors
-
- ##### Private Helpers #####
-
- def _do_load(self, data, many=None, partial=None, postprocess=True):
- """Deserialize `data`, returning the deserialized result and a dictonary of
- validation errors.
-
- :param data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to validate required fields. If its value is an iterable,
- only fields listed in that iterable will be ignored will be allowed missing.
- If `True`, all fields will be allowed missing.
- If `None`, the value for `self.partial` is used.
- :param bool postprocess: Whether to run post_load methods..
- :return: A tuple of the form (`data`, `errors`)
- """
- # Callable unmarshalling object
- unmarshal = marshalling.Unmarshaller()
- errors = {}
- many = self.many if many is None else bool(many)
- if partial is None:
- partial = self.partial
- try:
- processed_data = self._invoke_load_processors(
- PRE_LOAD,
- data,
- many,
- original_data=data)
- except ValidationError as err:
- errors = err.normalized_messages()
- result = None
- if not errors:
- try:
- result = unmarshal(
- processed_data,
- self.fields,
- many=many,
- partial=partial,
- dict_class=self.dict_class,
- index_errors=self.opts.index_errors,
- )
- except ValidationError as error:
- result = error.data
- self._invoke_field_validators(unmarshal, data=result, many=many)
- errors = unmarshal.errors
- field_errors = bool(errors)
- # Run schema-level migration
- try:
- self._invoke_validators(unmarshal, pass_many=True, data=result, original_data=data,
- many=many, field_errors=field_errors)
- except ValidationError as err:
- errors.update(err.messages)
- try:
- self._invoke_validators(unmarshal, pass_many=False, data=result, original_data=data,
- many=many, field_errors=field_errors)
- except ValidationError as err:
- errors.update(err.messages)
- # Run post processors
- if not errors and postprocess:
- try:
- result = self._invoke_load_processors(
- POST_LOAD,
- result,
- many,
- original_data=data)
- except ValidationError as err:
- errors = err.normalized_messages()
- if errors:
- # TODO: Remove self.__error_handler__ in a later release
- if self.__error_handler__ and callable(self.__error_handler__):
- self.__error_handler__(errors, data)
- exc = ValidationError(
- errors,
- field_names=unmarshal.error_field_names,
- fields=unmarshal.error_fields,
- data=data,
- **unmarshal.error_kwargs
- )
- self.handle_error(exc, data)
- if self.strict:
- raise exc
-
- return result, errors
-
- def _normalize_nested_options(self):
- """Apply then flatten nested schema options"""
- if self.only is not None:
- # Apply the only option to nested fields.
- self.__apply_nested_option('only', self.only, 'intersection')
- # Remove the child field names from the only option.
- self.only = self.set_class(
- [field.split('.', 1)[0] for field in self.only],
- )
- if self.exclude:
- # Apply the exclude option to nested fields.
- self.__apply_nested_option('exclude', self.exclude, 'union')
- # Remove the parent field names from the exclude option.
- self.exclude = self.set_class(
- [field for field in self.exclude if '.' not in field],
- )
-
- def __apply_nested_option(self, option_name, field_names, set_operation):
- """Apply nested options to nested fields"""
- # Split nested field names on the first dot.
- nested_fields = [name.split('.', 1) for name in field_names if '.' in name]
- # Partition the nested field names by parent field.
- nested_options = defaultdict(list)
- for parent, nested_names in nested_fields:
- nested_options[parent].append(nested_names)
- # Apply the nested field options.
- for key, options in iter(nested_options.items()):
- new_options = self.set_class(options)
- original_options = getattr(self.declared_fields[key], option_name, ())
- if original_options:
- if set_operation == 'union':
- new_options |= self.set_class(original_options)
- if set_operation == 'intersection':
- new_options &= self.set_class(original_options)
- setattr(self.declared_fields[key], option_name, new_options)
-
- def _update_fields(self, obj=None, many=False):
- """Update fields based on the passed in object."""
- if self.only is not None:
- # Return only fields specified in only option
- if self.opts.fields:
- field_names = self.set_class(self.opts.fields) & self.set_class(self.only)
- else:
- field_names = self.set_class(self.only)
- elif self.opts.fields:
- # Return fields specified in fields option
- field_names = self.set_class(self.opts.fields)
- elif self.opts.additional:
- # Return declared fields + additional fields
- field_names = (self.set_class(self.declared_fields.keys()) |
- self.set_class(self.opts.additional))
- else:
- field_names = self.set_class(self.declared_fields.keys())
-
- # If "exclude" option or param is specified, remove those fields
- field_names -= self.exclude
- ret = self.__filter_fields(field_names, obj, many=many)
- # Set parents
- self.__set_field_attrs(ret)
- self.fields = ret
- return self.fields
-
- def on_bind_field(self, field_name, field_obj):
- """Hook to modify a field when it is bound to the `Schema`. No-op by default."""
- return None
-
- def __set_field_attrs(self, fields_dict):
- """Bind fields to the schema, setting any necessary attributes
- on the fields (e.g. parent and name).
-
- Also set field load_only and dump_only values if field_name was
- specified in ``class Meta``.
- """
- for field_name, field_obj in iteritems(fields_dict):
- try:
- if field_name in self.load_only:
- field_obj.load_only = True
- if field_name in self.dump_only:
- field_obj.dump_only = True
- field_obj._add_to_schema(field_name, self)
- self.on_bind_field(field_name, field_obj)
- except TypeError:
- # field declared as a class, not an instance
- if (isinstance(field_obj, type) and
- issubclass(field_obj, base.FieldABC)):
- msg = ('Field for "{0}" must be declared as a '
- 'Field instance, not a class. '
- 'Did you mean "fields.{1}()"?'
- .format(field_name, field_obj.__name__))
- raise TypeError(msg)
- return fields_dict
-
- def __filter_fields(self, field_names, obj, many=False):
- """Return only those field_name:field_obj pairs specified by
- ``field_names``.
-
- :param set field_names: Field names to include in the final
- return dictionary.
- :param object|Mapping|list obj The object to base filtered fields on.
- :returns: An dict of field_name:field_obj pairs.
- """
- if obj and many:
- try: # list
- obj = obj[0]
- except IndexError: # Nothing to serialize
- return dict((k, v) for k, v in self.declared_fields.items() if k in field_names)
- ret = self.dict_class()
- for key in field_names:
- if key in self.declared_fields:
- ret[key] = self.declared_fields[key]
- else: # Implicit field creation (class Meta 'fields' or 'additional')
- if obj:
- attribute_type = None
- try:
- if isinstance(obj, Mapping):
- attribute_type = type(obj[key])
- else:
- attribute_type = type(getattr(obj, key))
- except (AttributeError, KeyError) as err:
- err_type = type(err)
- raise err_type(
- '"{0}" is not a valid field for {1}.'.format(key, obj))
- field_obj = self.TYPE_MAPPING.get(attribute_type, fields.Field)()
- else: # Object is None
- field_obj = fields.Field()
- # map key -> field (default to Raw)
- ret[key] = field_obj
- return ret
-
- def _invoke_dump_processors(self, tag_name, data, many, original_data=None):
- # The pass_many post-dump processors may do things like add an envelope, so
- # invoke those after invoking the non-pass_many processors which will expect
- # to get a list of items.
- data = self._invoke_processors(tag_name, pass_many=False,
- data=data, many=many, original_data=original_data)
- data = self._invoke_processors(tag_name, pass_many=True,
- data=data, many=many, original_data=original_data)
- return data
-
- def _invoke_load_processors(self, tag_name, data, many, original_data=None):
- # This has to invert the order of the dump processors, so run the pass_many
- # processors first.
- data = self._invoke_processors(tag_name, pass_many=True,
- data=data, many=many, original_data=original_data)
- data = self._invoke_processors(tag_name, pass_many=False,
- data=data, many=many, original_data=original_data)
- return data
-
- def _invoke_field_validators(self, unmarshal, data, many):
- for attr_name in self.__processors__[(VALIDATES, False)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_kwargs__[(VALIDATES, False)]
- field_name = validator_kwargs['field_name']
-
- try:
- field_obj = self.fields[field_name]
- except KeyError:
- if field_name in self.declared_fields:
- continue
- raise ValueError('"{0}" field does not exist.'.format(field_name))
-
- if many:
- for idx, item in enumerate(data):
- try:
- value = item[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = unmarshal.call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.load_from or field_name,
- field_obj=field_obj,
- index=(idx if self.opts.index_errors else None)
- )
- if validated_value is missing:
- data[idx].pop(field_name, None)
- else:
- try:
- value = data[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = unmarshal.call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.load_from or field_name,
- field_obj=field_obj
- )
- if validated_value is missing:
- data.pop(field_name, None)
-
- def _invoke_validators(
- self, unmarshal, pass_many, data, original_data, many, field_errors=False):
- errors = {}
- for attr_name in self.__processors__[(VALIDATES_SCHEMA, pass_many)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_kwargs__[(VALIDATES_SCHEMA, pass_many)]
- pass_original = validator_kwargs.get('pass_original', False)
-
- skip_on_field_errors = validator_kwargs['skip_on_field_errors']
- if skip_on_field_errors and field_errors:
- continue
-
- if pass_many:
- validator = functools.partial(validator, many=many)
- if many and not pass_many:
- for idx, item in enumerate(data):
- try:
- unmarshal.run_validator(validator,
- item, original_data, self.fields, many=many,
- index=idx, pass_original=pass_original)
- except ValidationError as err:
- errors.update(err.messages)
- else:
- try:
- unmarshal.run_validator(validator,
- data, original_data, self.fields, many=many,
- pass_original=pass_original)
- except ValidationError as err:
- errors.update(err.messages)
- if errors:
- raise ValidationError(errors)
- return None
-
- def _invoke_processors(self, tag_name, pass_many, data, many, original_data=None):
- for attr_name in self.__processors__[(tag_name, pass_many)]:
- # This will be a bound method.
- processor = getattr(self, attr_name)
-
- processor_kwargs = processor.__marshmallow_kwargs__[(tag_name, pass_many)]
- pass_original = processor_kwargs.get('pass_original', False)
-
- if pass_many:
- if pass_original:
- data = utils.if_none(processor(data, many, original_data), data)
- else:
- data = utils.if_none(processor(data, many), data)
- elif many:
- if pass_original:
- data = [utils.if_none(processor(item, original_data), item)
- for item in data]
- else:
- data = [utils.if_none(processor(item), item) for item in data]
- else:
- if pass_original:
- data = utils.if_none(processor(data, original_data), data)
- else:
- data = utils.if_none(processor(data), data)
- return data
-
-
-class Schema(with_metaclass(SchemaMeta, BaseSchema)):
- __doc__ = BaseSchema.__doc__
+ self
\ No newline at end of file
Stdout
[anthropic-agent] instance=marshmallow-code__marshmallow-1343 [anthropic-agent] repo=marshmallow-code/marshmallow [anthropic-agent] sandbox=d775c89c-8908-41be-af7e-7b37ac23153c [anthropic-agent] model=claude-sonnet-4-5-20250929 [anthropic-agent] context_files=5 [anthropic-agent] full_file_context=yes [anthropic-agent] edit_attempts=2
Stderr
[anthropic-agent] Attempt 1: File-rewrite plan contained no files.
Agent output
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
index 085c509b..9396636f 100644
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -369,600 +369,4 @@ class BaseSchema(base.SchemaABC):
)
self.extra = extra
self.context = context or {}
- self._normalize_nested_options()
- self._types_seen = set()
- self._update_fields(many=many)
-
- def __repr__(self):
- return '<{ClassName}(many={self.many}, strict={self.strict})>'.format(
- ClassName=self.__class__.__name__, self=self
- )
-
- def _postprocess(self, data, many, obj):
- if self.extra:
- if many:
- for each in data:
- each.update(self.extra)
- else:
- data.update(self.extra)
- return data
-
- @property
- def dict_class(self):
- return OrderedDict if self.ordered else dict
-
- @property
- def set_class(self):
- return OrderedSet if self.ordered else set
-
- ##### Override-able methods #####
-
- def handle_error(self, error, data):
- """Custom error handler function for the schema.
-
- :param ValidationError error: The `ValidationError` raised during (de)serialization.
- :param data: The original input data.
-
- .. versionadded:: 2.0.0
- """
- pass
-
- def get_attribute(self, attr, obj, default):
- """Defines how to pull values from an object to serialize.
-
- .. versionadded:: 2.0.0
- """
- return utils.get_value(attr, obj, default)
-
- ##### Handler decorators (deprecated) #####
-
- @classmethod
- def error_handler(cls, func):
- """Decorator that registers an error handler function for the schema.
- The function receives the :class:`Schema` instance, a dictionary of errors,
- and the serialized object (if serializing data) or data dictionary (if
- deserializing data) as arguments.
-
- Example: ::
-
- class UserSchema(Schema):
- email = fields.Email()
-
- @UserSchema.error_handler
- def handle_errors(schema, errors, obj):
- raise ValueError('An error occurred while marshalling {}'.format(obj))
-
- user = User(email='invalid')
- UserSchema().dump(user) # => raises ValueError
- UserSchema().load({'email': 'bademail'}) # raises ValueError
-
- .. versionadded:: 0.7.0
- .. deprecated:: 2.0.0
- Set the ``error_handler`` class Meta option instead.
- """
- warnings.warn(
- 'Schema.error_handler is deprecated. Set the error_handler class Meta option '
- 'instead.', category=DeprecationWarning
- )
- cls.__error_handler__ = func
- return func
-
- @classmethod
- def accessor(cls, func):
- """Decorator that registers a function for pulling values from an object
- to serialize. The function receives the :class:`Schema` instance, the
- ``key`` of the value to get, the ``obj`` to serialize, and an optional
- ``default`` value.
-
- .. deprecated:: 2.0.0
- Set the ``accessor`` class Meta option instead.
- """
- warnings.warn(
- 'Schema.accessor is deprecated. Set the accessor class Meta option '
- 'instead.', category=DeprecationWarning
- )
- cls.__accessor__ = func
- return func
-
- ##### Serialization/Deserialization API #####
-
- def dump(self, obj, many=None, update_fields=True, **kwargs):
- """Serialize an object to native Python data types according to this
- Schema's fields.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :param bool update_fields: Whether to update the schema's field classes. Typically
- set to `True`, but may be `False` when serializing a homogeneous collection.
- This parameter is used by `fields.Nested` to avoid multiple updates.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `MarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- # Callable marshalling object
- marshal = marshalling.Marshaller(prefix=self.prefix)
- errors = {}
- many = self.many if many is None else bool(many)
- if many and utils.is_iterable_but_not_string(obj):
- obj = list(obj)
-
- if self._has_processors:
- try:
- processed_obj = self._invoke_dump_processors(
- PRE_DUMP,
- obj,
- many,
- original_data=obj)
- except ValidationError as error:
- errors = error.normalized_messages()
- result = None
- else:
- processed_obj = obj
-
- if not errors:
- if update_fields:
- obj_type = type(processed_obj)
- if obj_type not in self._types_seen:
- self._update_fields(processed_obj, many=many)
- if not isinstance(processed_obj, Mapping):
- self._types_seen.add(obj_type)
-
- try:
- preresult = marshal(
- processed_obj,
- self.fields,
- many=many,
- # TODO: Remove self.__accessor__ in a later release
- accessor=self.get_attribute or self.__accessor__,
- dict_class=self.dict_class,
- index_errors=self.opts.index_errors,
- **kwargs
- )
- except ValidationError as error:
- errors = marshal.errors
- preresult = error.data
-
- result = self._postprocess(preresult, many, obj=obj)
-
- if not errors and self._has_processors:
- try:
- result = self._invoke_dump_processors(
- POST_DUMP,
- result,
- many,
- original_data=obj)
- except ValidationError as error:
- errors = error.normalized_messages()
- if errors:
- # TODO: Remove self.__error_handler__ in a later release
- if self.__error_handler__ and callable(self.__error_handler__):
- self.__error_handler__(errors, obj)
- exc = ValidationError(
- errors,
- field_names=marshal.error_field_names,
- fields=marshal.error_fields,
- data=obj,
- **marshal.error_kwargs
- )
- self.handle_error(exc, obj)
- if self.strict:
- raise exc
-
- return MarshalResult(result, errors)
-
- def dumps(self, obj, many=None, update_fields=True, *args, **kwargs):
- """Same as :meth:`dump`, except return a JSON-encoded string.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :param bool update_fields: Whether to update the schema's field classes. Typically
- set to `True`, but may be `False` when serializing a homogeneous collection.
- This parameter is used by `fields.Nested` to avoid multiple updates.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `MarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- deserialized, errors = self.dump(obj, many=many, update_fields=update_fields)
- ret = self.opts.json_module.dumps(deserialized, *args, **kwargs)
- return MarshalResult(ret, errors)
-
- def load(self, data, many=None, partial=None):
- """Deserialize a data structure to an object defined by this Schema's
- fields and :meth:`make_object`.
-
- :param dict data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `UnmarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- result, errors = self._do_load(data, many, partial=partial, postprocess=True)
- return UnmarshalResult(data=result, errors=errors)
-
- def loads(self, json_data, many=None, *args, **kwargs):
- """Same as :meth:`load`, except it takes a JSON string as input.
-
- :param str json_data: A JSON string of the data to deserialize.
- :param bool many: Whether to deserialize the parsed JSON data as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `UnmarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- # TODO: This avoids breaking backward compatibility if people were
- # passing in positional args after `many` for use by `json.loads`, but
- # ideally we shouldn't have to do this.
- partial = kwargs.pop('partial', None)
-
- data = self.opts.json_module.loads(json_data, *args, **kwargs)
- return self.load(data, many=many, partial=partial)
-
- def validate(self, data, many=None, partial=None):
- """Validate `data` against the schema, returning a dictionary of
- validation errors.
-
- :param dict data: The data to validate.
- :param bool many: Whether to validate `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A dictionary of validation errors.
- :rtype: dict
-
- .. versionadded:: 1.1.0
- """
- _, errors = self._do_load(data, many, partial=partial, postprocess=False)
- return errors
-
- ##### Private Helpers #####
-
- def _do_load(self, data, many=None, partial=None, postprocess=True):
- """Deserialize `data`, returning the deserialized result and a dictionary of
- validation errors.
-
- :param data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to validate required fields. If its value is an iterable,
- only fields listed in that iterable will be allowed missing.
- If `True`, all fields will be allowed missing.
- If `None`, the value for `self.partial` is used.
- :param bool postprocess: Whether to run post_load methods.
- :return: A tuple of the form (`data`, `errors`)
- """
- # Callable unmarshalling object
- unmarshal = marshalling.Unmarshaller()
- errors = {}
- many = self.many if many is None else bool(many)
- if partial is None:
- partial = self.partial
- try:
- processed_data = self._invoke_load_processors(
- PRE_LOAD,
- data,
- many,
- original_data=data)
- except ValidationError as err:
- errors = err.normalized_messages()
- result = None
- if not errors:
- try:
- result = unmarshal(
- processed_data,
- self.fields,
- many=many,
- partial=partial,
- dict_class=self.dict_class,
- index_errors=self.opts.index_errors,
- )
- except ValidationError as error:
- result = error.data
- self._invoke_field_validators(unmarshal, data=result, many=many)
- errors = unmarshal.errors
- field_errors = bool(errors)
- # Run schema-level validation
- try:
- self._invoke_validators(unmarshal, pass_many=True, data=result, original_data=data,
- many=many, field_errors=field_errors)
- except ValidationError as err:
- errors.update(err.messages)
- try:
- self._invoke_validators(unmarshal, pass_many=False, data=result, original_data=data,
- many=many, field_errors=field_errors)
- except ValidationError as err:
- errors.update(err.messages)
- # Run post processors
- if not errors and postprocess:
- try:
- result = self._invoke_load_processors(
- POST_LOAD,
- result,
- many,
- original_data=data)
- except ValidationError as err:
- errors = err.normalized_messages()
- if errors:
- # TODO: Remove self.__error_handler__ in a later release
- if self.__error_handler__ and callable(self.__error_handler__):
- self.__error_handler__(errors, data)
- exc = ValidationError(
- errors,
- field_names=unmarshal.error_field_names,
- fields=unmarshal.error_fields,
- data=data,
- **unmarshal.error_kwargs
- )
- self.handle_error(exc, data)
- if self.strict:
- raise exc
-
- return result, errors
-
- def _normalize_nested_options(self):
- """Apply then flatten nested schema options"""
- if self.only is not None:
- # Apply the only option to nested fields.
- self.__apply_nested_option('only', self.only, 'intersection')
- # Remove the child field names from the only option.
- self.only = self.set_class(
- [field.split('.', 1)[0] for field in self.only],
- )
- if self.exclude:
- # Apply the exclude option to nested fields.
- self.__apply_nested_option('exclude', self.exclude, 'union')
- # Remove the parent field names from the exclude option.
- self.exclude = self.set_class(
- [field for field in self.exclude if '.' not in field],
- )
-
- def __apply_nested_option(self, option_name, field_names, set_operation):
- """Apply nested options to nested fields"""
- # Split nested field names on the first dot.
- nested_fields = [name.split('.', 1) for name in field_names if '.' in name]
- # Partition the nested field names by parent field.
- nested_options = defaultdict(list)
- for parent, nested_names in nested_fields:
- nested_options[parent].append(nested_names)
- # Apply the nested field options.
- for key, options in iter(nested_options.items()):
- new_options = self.set_class(options)
- original_options = getattr(self.declared_fields[key], option_name, ())
- if original_options:
- if set_operation == 'union':
- new_options |= self.set_class(original_options)
- if set_operation == 'intersection':
- new_options &= self.set_class(original_options)
- setattr(self.declared_fields[key], option_name, new_options)
-
- def _update_fields(self, obj=None, many=False):
- """Update fields based on the passed in object."""
- if self.only is not None:
- # Return only fields specified in only option
- if self.opts.fields:
- field_names = self.set_class(self.opts.fields) & self.set_class(self.only)
- else:
- field_names = self.set_class(self.only)
- elif self.opts.fields:
- # Return fields specified in fields option
- field_names = self.set_class(self.opts.fields)
- elif self.opts.additional:
- # Return declared fields + additional fields
- field_names = (self.set_class(self.declared_fields.keys()) |
- self.set_class(self.opts.additional))
- else:
- field_names = self.set_class(self.declared_fields.keys())
-
- # If "exclude" option or param is specified, remove those fields
- field_names -= self.exclude
- ret = self.__filter_fields(field_names, obj, many=many)
- # Set parents
- self.__set_field_attrs(ret)
- self.fields = ret
- return self.fields
-
- def on_bind_field(self, field_name, field_obj):
- """Hook to modify a field when it is bound to the `Schema`. No-op by default."""
- return None
-
- def __set_field_attrs(self, fields_dict):
- """Bind fields to the schema, setting any necessary attributes
- on the fields (e.g. parent and name).
-
- Also set field load_only and dump_only values if field_name was
- specified in ``class Meta``.
- """
- for field_name, field_obj in iteritems(fields_dict):
- try:
- if field_name in self.load_only:
- field_obj.load_only = True
- if field_name in self.dump_only:
- field_obj.dump_only = True
- field_obj._add_to_schema(field_name, self)
- self.on_bind_field(field_name, field_obj)
- except TypeError:
- # field declared as a class, not an instance
- if (isinstance(field_obj, type) and
- issubclass(field_obj, base.FieldABC)):
- msg = ('Field for "{0}" must be declared as a '
- 'Field instance, not a class. '
- 'Did you mean "fields.{1}()"?'
- .format(field_name, field_obj.__name__))
- raise TypeError(msg)
- return fields_dict
-
- def __filter_fields(self, field_names, obj, many=False):
- """Return only those field_name:field_obj pairs specified by
- ``field_names``.
-
- :param set field_names: Field names to include in the final
- return dictionary.
- :param object|Mapping|list obj: The object to base filtered fields on.
- :returns: A dict of field_name:field_obj pairs.
- """
- if obj and many:
- try: # list
- obj = obj[0]
- except IndexError: # Nothing to serialize
- return dict((k, v) for k, v in self.declared_fields.items() if k in field_names)
- ret = self.dict_class()
- for key in field_names:
- if key in self.declared_fields:
- ret[key] = self.declared_fields[key]
- else: # Implicit field creation (class Meta 'fields' or 'additional')
- if obj:
- attribute_type = None
- try:
- if isinstance(obj, Mapping):
- attribute_type = type(obj[key])
- else:
- attribute_type = type(getattr(obj, key))
- except (AttributeError, KeyError) as err:
- err_type = type(err)
- raise err_type(
- '"{0}" is not a valid field for {1}.'.format(key, obj))
- field_obj = self.TYPE_MAPPING.get(attribute_type, fields.Field)()
- else: # Object is None
- field_obj = fields.Field()
- # map key -> field (default to Raw)
- ret[key] = field_obj
- return ret
-
- def _invoke_dump_processors(self, tag_name, data, many, original_data=None):
- # The pass_many post-dump processors may do things like add an envelope, so
- # invoke those after invoking the non-pass_many processors which will expect
- # to get a list of items.
- data = self._invoke_processors(tag_name, pass_many=False,
- data=data, many=many, original_data=original_data)
- data = self._invoke_processors(tag_name, pass_many=True,
- data=data, many=many, original_data=original_data)
- return data
-
- def _invoke_load_processors(self, tag_name, data, many, original_data=None):
- # This has to invert the order of the dump processors, so run the pass_many
- # processors first.
- data = self._invoke_processors(tag_name, pass_many=True,
- data=data, many=many, original_data=original_data)
- data = self._invoke_processors(tag_name, pass_many=False,
- data=data, many=many, original_data=original_data)
- return data
-
- def _invoke_field_validators(self, unmarshal, data, many):
- for attr_name in self.__processors__[(VALIDATES, False)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_kwargs__[(VALIDATES, False)]
- field_name = validator_kwargs['field_name']
-
- try:
- field_obj = self.fields[field_name]
- except KeyError:
- if field_name in self.declared_fields:
- continue
- raise ValueError('"{0}" field does not exist.'.format(field_name))
-
- if many:
- for idx, item in enumerate(data):
- try:
- value = item[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = unmarshal.call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.load_from or field_name,
- field_obj=field_obj,
- index=(idx if self.opts.index_errors else None)
- )
- if validated_value is missing:
- data[idx].pop(field_name, None)
- else:
- try:
- value = data[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = unmarshal.call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.load_from or field_name,
- field_obj=field_obj
- )
- if validated_value is missing:
- data.pop(field_name, None)
-
- def _invoke_validators(
- self, unmarshal, pass_many, data, original_data, many, field_errors=False):
- errors = {}
- for attr_name in self.__processors__[(VALIDATES_SCHEMA, pass_many)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_kwargs__[(VALIDATES_SCHEMA, pass_many)]
- pass_original = validator_kwargs.get('pass_original', False)
-
- skip_on_field_errors = validator_kwargs['skip_on_field_errors']
- if skip_on_field_errors and field_errors:
- continue
-
- if pass_many:
- validator = functools.partial(validator, many=many)
- if many and not pass_many:
- for idx, item in enumerate(data):
- try:
- unmarshal.run_validator(validator,
- item, original_data, self.fields, many=many,
- index=idx, pass_original=pass_original)
- except ValidationError as err:
- errors.update(err.messages)
- else:
- try:
- unmarshal.run_validator(validator,
- data, original_data, self.fields, many=many,
- pass_original=pass_original)
- except ValidationError as err:
- errors.update(err.messages)
- if errors:
- raise ValidationError(errors)
- return None
-
- def _invoke_processors(self, tag_name, pass_many, data, many, original_data=None):
- for attr_name in self.__processors__[(tag_name, pass_many)]:
- # This will be a bound method.
- processor = getattr(self, attr_name)
-
- processor_kwargs = processor.__marshmallow_kwargs__[(tag_name, pass_many)]
- pass_original = processor_kwargs.get('pass_original', False)
-
- if pass_many:
- if pass_original:
- data = utils.if_none(processor(data, many, original_data), data)
- else:
- data = utils.if_none(processor(data, many), data)
- elif many:
- if pass_original:
- data = [utils.if_none(processor(item, original_data), item)
- for item in data]
- else:
- data = [utils.if_none(processor(item), item) for item in data]
- else:
- if pass_original:
- data = utils.if_none(processor(data, original_data), data)
- else:
- data = utils.if_none(processor(data), data)
- return data
-
-
-class Schema(with_metaclass(SchemaMeta, BaseSchema)):
- __doc__ = BaseSchema.__doc__
+ self
\ No newline at end of file
Scoring
Passing target tests
No fail-to-pass successes recorded yet.
Failing target tests
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_wrong_nested_type_with_validates_method
Maintained regression tests
No pass-to-pass successes recorded yet.
Regressed tests
tests/test_marshalling.py::test_missing_is_falsy
tests/test_marshalling.py::TestMarshaller::test_prefix
tests/test_marshalling.py::TestMarshaller::test_marshalling_generator
tests/test_marshalling.py::TestMarshaller::test_default_to_missing
tests/test_marshalling.py::TestMarshaller::test_serialize_fields_with_load_only_param
tests/test_marshalling.py::TestMarshaller::test_missing_data_are_skipped
tests/test_marshalling.py::TestMarshaller::test_serialize_with_load_only_doesnt_validate
tests/test_marshalling.py::TestMarshaller::test_serialize_fields_with_dump_to_param
tests/test_marshalling.py::TestMarshaller::test_serialize_fields_with_dump_to_and_prefix_params
tests/test_marshalling.py::TestMarshaller::test_stores_indices_of_errors_when_many_equals_true
tests/test_marshalling.py::TestMarshaller::test_doesnt_store_errors_when_index_errors_equals_false
tests/test_marshalling.py::TestUnmarshaller::test_extra_data_is_ignored
tests/test_marshalling.py::TestUnmarshaller::test_stores_errors
tests/test_marshalling.py::TestUnmarshaller::test_stores_indices_of_errors_when_many_equals_true
tests/test_marshalling.py::TestUnmarshaller::test_doesnt_store_errors_when_index_errors_equals_false
tests/test_marshalling.py::TestUnmarshaller::test_deserialize
tests/test_marshalling.py::TestUnmarshaller::test_extra_fields
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_many
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_stores_errors
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_fields_with_attribute_param
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_fields_with_load_from_param
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_fields_with_dump_only_param
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_wrong_type_root_data
tests/test_marshalling.py::TestUnmarshaller::test_deserialize_wrong_type_nested_data
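The regressed tests above all exercise the `Marshaller`/`Unmarshaller` driving code that the patch deletes. One detail worth noting from the removed `_invoke_dump_processors`/`_invoke_load_processors` pair is the processor ordering: on dump, per-item (`pass_many=False`) processors run first and `pass_many=True` processors run last (so an envelope wraps already-serialized items), while load inverts the order. A minimal standalone sketch of that convention — plain Python with illustrative names, no marshmallow dependency:

```python
# Ordering convention from the deleted _invoke_dump_processors /
# _invoke_load_processors: dump runs pass_many=False processors first,
# then pass_many=True processors; load inverts the order so an envelope
# added on dump is removed before per-item processing on load.

def invoke_dump_processors(items, per_item, whole):
    items = [per_item(item) for item in items]   # pass_many=False first
    return whole(items)                          # pass_many=True last

def invoke_load_processors(payload, whole, per_item):
    items = whole(payload)                       # pass_many=True first
    return [per_item(item) for item in items]    # pass_many=False last

def serialize(item):          # stand-in for a post_dump(pass_many=False) hook
    return {'id': item}

def add_envelope(items):      # stand-in for a post_dump(pass_many=True) hook
    return {'results': items}

def strip_envelope(payload):  # stand-in for a pre_load(pass_many=True) hook
    return payload['results']

def deserialize(data):        # stand-in for a post_load(pass_many=False) hook
    return data['id']

dumped = invoke_dump_processors([1, 2], serialize, add_envelope)
loaded = invoke_load_processors(dumped, strip_envelope, deserialize)
```

Round-tripping `[1, 2]` produces `{'results': [{'id': 1}, {'id': 2}]}` and then `[1, 2]` again, which is why the load path must mirror the dump path in reverse.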
Harness output
+ source /opt/miniconda3/bin/activate
++ _CONDA_ROOT=/opt/miniconda3
++ . /opt/miniconda3/etc/profile.d/conda.sh
+++ export CONDA_EXE=/opt/miniconda3/bin/conda
+++ CONDA_EXE=/opt/miniconda3/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ /opt/miniconda3/bin/conda shell.posix activate
++ ask_conda='PS1='\''(base) '\''
export PATH='\''/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''base'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_PREFIX_1='\''/opt/miniconda3/envs/testbed'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
++ eval 'PS1='\''(base) '\''
export PATH='\''/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''base'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_PREFIX_1='\''/opt/miniconda3/envs/testbed'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
+++ PS1='(base) '
+++ export PATH=/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ PATH=/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ export CONDA_PREFIX=/opt/miniconda3
+++ CONDA_PREFIX=/opt/miniconda3
+++ export CONDA_SHLVL=2
+++ CONDA_SHLVL=2
+++ export CONDA_DEFAULT_ENV=base
+++ CONDA_DEFAULT_ENV=base
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_PREFIX_1=/opt/miniconda3/envs/testbed
+++ CONDA_PREFIX_1=/opt/miniconda3/envs/testbed
+++ export CONDA_EXE=/opt/miniconda3/bin/conda
+++ CONDA_EXE=/opt/miniconda3/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate testbed
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate testbed
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate testbed
++ /opt/miniconda3/bin/conda shell.posix activate testbed
+ ask_conda='PS1='\''(testbed) '\''
export PATH='\''/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3/envs/testbed'\''
export CONDA_SHLVL='\''3'\''
export CONDA_DEFAULT_ENV='\''testbed'\''
export CONDA_PROMPT_MODIFIER='\''(testbed) '\''
export CONDA_PREFIX_2='\''/opt/miniconda3'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
+ eval 'PS1='\''(testbed) '\''
export PATH='\''/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3/envs/testbed'\''
export CONDA_SHLVL='\''3'\''
export CONDA_DEFAULT_ENV='\''testbed'\''
export CONDA_PROMPT_MODIFIER='\''(testbed) '\''
export CONDA_PREFIX_2='\''/opt/miniconda3'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
++ PS1='(testbed) '
++ export PATH=/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ PATH=/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ export CONDA_PREFIX=/opt/miniconda3/envs/testbed
++ CONDA_PREFIX=/opt/miniconda3/envs/testbed
++ export CONDA_SHLVL=3
++ CONDA_SHLVL=3
++ export CONDA_DEFAULT_ENV=testbed
++ CONDA_DEFAULT_ENV=testbed
++ export 'CONDA_PROMPT_MODIFIER=(testbed) '
++ CONDA_PROMPT_MODIFIER='(testbed) '
++ export CONDA_PREFIX_2=/opt/miniconda3
++ CONDA_PREFIX_2=/opt/miniconda3
++ export CONDA_EXE=/opt/miniconda3/bin/conda
++ CONDA_EXE=/opt/miniconda3/bin/conda
++ export _CE_M=
++ _CE_M=
++ export _CE_CONDA=
++ _CE_CONDA=
++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ cd /testbed
+ git config --global --add safe.directory /testbed
+ cd /testbed
+ git status
On branch dev
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: src/marshmallow/schema.py
no changes added to commit (use "git add" and/or "git commit -a")
+ git show
commit d4f98233971ee3e73bba47026cd77b4ba69a25bb
Author: SWE-bench <setup@swebench.config>
Date: Tue May 6 23:07:21 2025 +0000
SWE-bench
+ git -c core.fileMode=false diff 2be2d83a1a9a6d3d9b85804f3ab545cecc409bb0
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
index 085c509b..9396636f 100644
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -369,600 +369,4 @@ class BaseSchema(base.SchemaABC):
)
self.extra = extra
self.context = context or {}
- self._normalize_nested_options()
- self._types_seen = set()
- self._update_fields(many=many)
-
- def __repr__(self):
- return '<{ClassName}(many={self.many}, strict={self.strict})>'.format(
- ClassName=self.__class__.__name__, self=self
- )
-
- def _postprocess(self, data, many, obj):
- if self.extra:
- if many:
- for each in data:
- each.update(self.extra)
- else:
- data.update(self.extra)
- return data
-
- @property
- def dict_class(self):
- return OrderedDict if self.ordered else dict
-
- @property
- def set_class(self):
- return OrderedSet if self.ordered else set
-
- ##### Override-able methods #####
-
- def handle_error(self, error, data):
- """Custom error handler function for the schema.
-
- :param ValidationError error: The `ValidationError` raised during (de)serialization.
- :param data: The original input data.
-
- .. versionadded:: 2.0.0
- """
- pass
-
- def get_attribute(self, attr, obj, default):
- """Defines how to pull values from an object to serialize.
-
- .. versionadded:: 2.0.0
- """
- return utils.get_value(attr, obj, default)
-
- ##### Handler decorators (deprecated) #####
-
- @classmethod
- def error_handler(cls, func):
- """Decorator that registers an error handler function for the schema.
- The function receives the :class:`Schema` instance, a dictionary of errors,
- and the serialized object (if serializing data) or data dictionary (if
- deserializing data) as arguments.
-
- Example: ::
-
- class UserSchema(Schema):
- email = fields.Email()
-
- @UserSchema.error_handler
- def handle_errors(schema, errors, obj):
- raise ValueError('An error occurred while marshalling {}'.format(obj))
-
- user = User(email='invalid')
- UserSchema().dump(user) # => raises ValueError
- UserSchema().load({'email': 'bademail'}) # raises ValueError
-
- .. versionadded:: 0.7.0
- .. deprecated:: 2.0.0
- Set the ``error_handler`` class Meta option instead.
- """
- warnings.warn(
- 'Schema.error_handler is deprecated. Set the error_handler class Meta option '
- 'instead.', category=DeprecationWarning
- )
- cls.__error_handler__ = func
- return func
-
- @classmethod
- def accessor(cls, func):
- """Decorator that registers a function for pulling values from an object
- to serialize. The function receives the :class:`Schema` instance, the
- ``key`` of the value to get, the ``obj`` to serialize, and an optional
- ``default`` value.
-
- .. deprecated:: 2.0.0
- Set the ``error_handler`` class Meta option instead.
- """
- warnings.warn(
- 'Schema.accessor is deprecated. Set the accessor class Meta option '
- 'instead.', category=DeprecationWarning
- )
- cls.__accessor__ = func
- return func
-
- ##### Serialization/Deserialization API #####
-
- def dump(self, obj, many=None, update_fields=True, **kwargs):
- """Serialize an object to native Python data types according to this
- Schema's fields.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :param bool update_fields: Whether to update the schema's field classes. Typically
- set to `True`, but may be `False` when serializing a homogenous collection.
- This parameter is used by `fields.Nested` to avoid multiple updates.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `MarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- # Callable marshalling object
- marshal = marshalling.Marshaller(prefix=self.prefix)
- errors = {}
- many = self.many if many is None else bool(many)
- if many and utils.is_iterable_but_not_string(obj):
- obj = list(obj)
-
- if self._has_processors:
- try:
- processed_obj = self._invoke_dump_processors(
- PRE_DUMP,
- obj,
- many,
- original_data=obj)
- except ValidationError as error:
- errors = error.normalized_messages()
- result = None
- else:
- processed_obj = obj
-
- if not errors:
- if update_fields:
- obj_type = type(processed_obj)
- if obj_type not in self._types_seen:
- self._update_fields(processed_obj, many=many)
- if not isinstance(processed_obj, Mapping):
- self._types_seen.add(obj_type)
-
- try:
- preresult = marshal(
- processed_obj,
- self.fields,
- many=many,
- # TODO: Remove self.__accessor__ in a later release
- accessor=self.get_attribute or self.__accessor__,
- dict_class=self.dict_class,
- index_errors=self.opts.index_errors,
- **kwargs
- )
- except ValidationError as error:
- errors = marshal.errors
- preresult = error.data
-
- result = self._postprocess(preresult, many, obj=obj)
-
- if not errors and self._has_processors:
- try:
- result = self._invoke_dump_processors(
- POST_DUMP,
- result,
- many,
- original_data=obj)
- except ValidationError as error:
- errors = error.normalized_messages()
- if errors:
- # TODO: Remove self.__error_handler__ in a later release
- if self.__error_handler__ and callable(self.__error_handler__):
- self.__error_handler__(errors, obj)
- exc = ValidationError(
- errors,
- field_names=marshal.error_field_names,
- fields=marshal.error_fields,
- data=obj,
- **marshal.error_kwargs
- )
- self.handle_error(exc, obj)
- if self.strict:
- raise exc
-
- return MarshalResult(result, errors)
-
- def dumps(self, obj, many=None, update_fields=True, *args, **kwargs):
- """Same as :meth:`dump`, except return a JSON-encoded string.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :param bool update_fields: Whether to update the schema's field classes. Typically
- set to `True`, but may be `False` when serializing a homogenous collection.
- This parameter is used by `fields.Nested` to avoid multiple updates.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `MarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- deserialized, errors = self.dump(obj, many=many, update_fields=update_fields)
- ret = self.opts.json_module.dumps(deserialized, *args, **kwargs)
- return MarshalResult(ret, errors)
-
- def load(self, data, many=None, partial=None):
- """Deserialize a data structure to an object defined by this Schema's
- fields and :meth:`make_object`.
-
- :param dict data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `UnmarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- result, errors = self._do_load(data, many, partial=partial, postprocess=True)
- return UnmarshalResult(data=result, errors=errors)
-
- def loads(self, json_data, many=None, *args, **kwargs):
- """Same as :meth:`load`, except it takes a JSON string as input.
-
- :param str json_data: A JSON string of the data to deserialize.
-        :param bool many: Whether to deserialize `json_data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A tuple of the form (``data``, ``errors``)
- :rtype: `UnmarshalResult`, a `collections.namedtuple`
-
- .. versionadded:: 1.0.0
- """
- # TODO: This avoids breaking backward compatibility if people were
- # passing in positional args after `many` for use by `json.loads`, but
- # ideally we shouldn't have to do this.
- partial = kwargs.pop('partial', None)
-
- data = self.opts.json_module.loads(json_data, *args, **kwargs)
- return self.load(data, many=many, partial=partial)
-
- def validate(self, data, many=None, partial=None):
- """Validate `data` against the schema, returning a dictionary of
- validation errors.
-
- :param dict data: The data to validate.
- :param bool many: Whether to validate `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields. If `None`,
- the value for `self.partial` is used. If its value is an iterable,
- only missing fields listed in that iterable will be ignored.
- :return: A dictionary of validation errors.
- :rtype: dict
-
- .. versionadded:: 1.1.0
- """
- _, errors = self._do_load(data, many, partial=partial, postprocess=False)
- return errors
-
- ##### Private Helpers #####
-
- def _do_load(self, data, many=None, partial=None, postprocess=True):
- """Deserialize `data`, returning the deserialized result and a dictonary of
- validation errors.
-
- :param data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to validate required fields. If its value is an iterable,
-            only fields listed in that iterable will be allowed missing.
- If `True`, all fields will be allowed missing.
- If `None`, the value for `self.partial` is used.
-        :param bool postprocess: Whether to run post_load methods.
- :return: A tuple of the form (`data`, `errors`)
- """
- # Callable unmarshalling object
- unmarshal = marshalling.Unmarshaller()
- errors = {}
- many = self.many if many is None else bool(many)
- if partial is None:
- partial = self.partial
- try:
- processed_data = self._invoke_load_processors(
- PRE_LOAD,
- data,
- many,
- original_data=data)
- except ValidationError as err:
- errors = err.normalized_messages()
- result = None
- if not errors:
- try:
- result = unmarshal(
- processed_data,
- self.fields,
- many=many,
- partial=partial,
- dict_class=self.dict_class,
- index_errors=self.opts.index_errors,
- )
- except ValidationError as error:
- result = error.data
- self._invoke_field_validators(unmarshal, data=result, many=many)
- errors = unmarshal.errors
- field_errors = bool(errors)
-        # Run schema-level validation
- try:
- self._invoke_validators(unmarshal, pass_many=True, data=result, original_data=data,
- many=many, field_errors=field_errors)
- except ValidationError as err:
- errors.update(err.messages)
- try:
- self._invoke_validators(unmarshal, pass_many=False, data=result, original_data=data,
- many=many, field_errors=field_errors)
- except ValidationError as err:
- errors.update(err.messages)
- # Run post processors
- if not errors and postprocess:
- try:
- result = self._invoke_load_processors(
- POST_LOAD,
- result,
- many,
- original_data=data)
- except ValidationError as err:
- errors = err.normalized_messages()
- if errors:
- # TODO: Remove self.__error_handler__ in a later release
- if self.__error_handler__ and callable(self.__error_handler__):
- self.__error_handler__(errors, data)
- exc = ValidationError(
- errors,
- field_names=unmarshal.error_field_names,
- fields=unmarshal.error_fields,
- data=data,
- **unmarshal.error_kwargs
- )
- self.handle_error(exc, data)
- if self.strict:
- raise exc
-
- return result, errors
-
- def _normalize_nested_options(self):
- """Apply then flatten nested schema options"""
- if self.only is not None:
- # Apply the only option to nested fields.
- self.__apply_nested_option('only', self.only, 'intersection')
- # Remove the child field names from the only option.
- self.only = self.set_class(
- [field.split('.', 1)[0] for field in self.only],
- )
- if self.exclude:
- # Apply the exclude option to nested fields.
- self.__apply_nested_option('exclude', self.exclude, 'union')
- # Remove the parent field names from the exclude option.
- self.exclude = self.set_class(
- [field for field in self.exclude if '.' not in field],
- )
-
- def __apply_nested_option(self, option_name, field_names, set_operation):
- """Apply nested options to nested fields"""
- # Split nested field names on the first dot.
- nested_fields = [name.split('.', 1) for name in field_names if '.' in name]
- # Partition the nested field names by parent field.
- nested_options = defaultdict(list)
- for parent, nested_names in nested_fields:
- nested_options[parent].append(nested_names)
- # Apply the nested field options.
- for key, options in iter(nested_options.items()):
- new_options = self.set_class(options)
- original_options = getattr(self.declared_fields[key], option_name, ())
- if original_options:
- if set_operation == 'union':
- new_options |= self.set_class(original_options)
- if set_operation == 'intersection':
- new_options &= self.set_class(original_options)
- setattr(self.declared_fields[key], option_name, new_options)
-
- def _update_fields(self, obj=None, many=False):
- """Update fields based on the passed in object."""
- if self.only is not None:
- # Return only fields specified in only option
- if self.opts.fields:
- field_names = self.set_class(self.opts.fields) & self.set_class(self.only)
- else:
- field_names = self.set_class(self.only)
- elif self.opts.fields:
- # Return fields specified in fields option
- field_names = self.set_class(self.opts.fields)
- elif self.opts.additional:
- # Return declared fields + additional fields
- field_names = (self.set_class(self.declared_fields.keys()) |
- self.set_class(self.opts.additional))
- else:
- field_names = self.set_class(self.declared_fields.keys())
-
- # If "exclude" option or param is specified, remove those fields
- field_names -= self.exclude
- ret = self.__filter_fields(field_names, obj, many=many)
- # Set parents
- self.__set_field_attrs(ret)
- self.fields = ret
- return self.fields
-
- def on_bind_field(self, field_name, field_obj):
- """Hook to modify a field when it is bound to the `Schema`. No-op by default."""
- return None
-
- def __set_field_attrs(self, fields_dict):
- """Bind fields to the schema, setting any necessary attributes
- on the fields (e.g. parent and name).
-
- Also set field load_only and dump_only values if field_name was
- specified in ``class Meta``.
- """
- for field_name, field_obj in iteritems(fields_dict):
- try:
- if field_name in self.load_only:
- field_obj.load_only = True
- if field_name in self.dump_only:
- field_obj.dump_only = True
- field_obj._add_to_schema(field_name, self)
- self.on_bind_field(field_name, field_obj)
- except TypeError:
- # field declared as a class, not an instance
- if (isinstance(field_obj, type) and
- issubclass(field_obj, base.FieldABC)):
- msg = ('Field for "{0}" must be declared as a '
- 'Field instance, not a class. '
- 'Did you mean "fields.{1}()"?'
- .format(field_name, field_obj.__name__))
- raise TypeError(msg)
- return fields_dict
-
- def __filter_fields(self, field_names, obj, many=False):
- """Return only those field_name:field_obj pairs specified by
- ``field_names``.
-
- :param set field_names: Field names to include in the final
- return dictionary.
-        :param object|Mapping|list obj: The object to base filtered fields on.
-        :returns: A dict of field_name:field_obj pairs.
- """
- if obj and many:
- try: # list
- obj = obj[0]
- except IndexError: # Nothing to serialize
- return dict((k, v) for k, v in self.declared_fields.items() if k in field_names)
- ret = self.dict_class()
- for key in field_names:
- if key in self.declared_fields:
- ret[key] = self.declared_fields[key]
- else: # Implicit field creation (class Meta 'fields' or 'additional')
- if obj:
- attribute_type = None
- try:
- if isinstance(obj, Mapping):
- attribute_type = type(obj[key])
- else:
- attribute_type = type(getattr(obj, key))
- except (AttributeError, KeyError) as err:
- err_type = type(err)
- raise err_type(
- '"{0}" is not a valid field for {1}.'.format(key, obj))
- field_obj = self.TYPE_MAPPING.get(attribute_type, fields.Field)()
- else: # Object is None
- field_obj = fields.Field()
- # map key -> field (default to Raw)
- ret[key] = field_obj
- return ret
-
- def _invoke_dump_processors(self, tag_name, data, many, original_data=None):
- # The pass_many post-dump processors may do things like add an envelope, so
- # invoke those after invoking the non-pass_many processors which will expect
- # to get a list of items.
- data = self._invoke_processors(tag_name, pass_many=False,
- data=data, many=many, original_data=original_data)
- data = self._invoke_processors(tag_name, pass_many=True,
- data=data, many=many, original_data=original_data)
- return data
-
- def _invoke_load_processors(self, tag_name, data, many, original_data=None):
- # This has to invert the order of the dump processors, so run the pass_many
- # processors first.
- data = self._invoke_processors(tag_name, pass_many=True,
- data=data, many=many, original_data=original_data)
- data = self._invoke_processors(tag_name, pass_many=False,
- data=data, many=many, original_data=original_data)
- return data
-
- def _invoke_field_validators(self, unmarshal, data, many):
- for attr_name in self.__processors__[(VALIDATES, False)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_kwargs__[(VALIDATES, False)]
- field_name = validator_kwargs['field_name']
-
- try:
- field_obj = self.fields[field_name]
- except KeyError:
- if field_name in self.declared_fields:
- continue
- raise ValueError('"{0}" field does not exist.'.format(field_name))
-
- if many:
- for idx, item in enumerate(data):
- try:
- value = item[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = unmarshal.call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.load_from or field_name,
- field_obj=field_obj,
- index=(idx if self.opts.index_errors else None)
- )
- if validated_value is missing:
- data[idx].pop(field_name, None)
- else:
- try:
- value = data[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = unmarshal.call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.load_from or field_name,
- field_obj=field_obj
- )
- if validated_value is missing:
- data.pop(field_name, None)
-
- def _invoke_validators(
- self, unmarshal, pass_many, data, original_data, many, field_errors=False):
- errors = {}
- for attr_name in self.__processors__[(VALIDATES_SCHEMA, pass_many)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_kwargs__[(VALIDATES_SCHEMA, pass_many)]
- pass_original = validator_kwargs.get('pass_original', False)
-
- skip_on_field_errors = validator_kwargs['skip_on_field_errors']
- if skip_on_field_errors and field_errors:
- continue
-
- if pass_many:
- validator = functools.partial(validator, many=many)
- if many and not pass_many:
- for idx, item in enumerate(data):
- try:
- unmarshal.run_validator(validator,
- item, original_data, self.fields, many=many,
- index=idx, pass_original=pass_original)
- except ValidationError as err:
- errors.update(err.messages)
- else:
- try:
- unmarshal.run_validator(validator,
- data, original_data, self.fields, many=many,
- pass_original=pass_original)
- except ValidationError as err:
- errors.update(err.messages)
- if errors:
- raise ValidationError(errors)
- return None
-
- def _invoke_processors(self, tag_name, pass_many, data, many, original_data=None):
- for attr_name in self.__processors__[(tag_name, pass_many)]:
- # This will be a bound method.
- processor = getattr(self, attr_name)
-
- processor_kwargs = processor.__marshmallow_kwargs__[(tag_name, pass_many)]
- pass_original = processor_kwargs.get('pass_original', False)
-
- if pass_many:
- if pass_original:
- data = utils.if_none(processor(data, many, original_data), data)
- else:
- data = utils.if_none(processor(data, many), data)
- elif many:
- if pass_original:
- data = [utils.if_none(processor(item, original_data), item)
- for item in data]
- else:
- data = [utils.if_none(processor(item), item) for item in data]
- else:
- if pass_original:
- data = utils.if_none(processor(data, original_data), data)
- else:
- data = utils.if_none(processor(data), data)
- return data
-
-
-class Schema(with_metaclass(SchemaMeta, BaseSchema)):
- __doc__ = BaseSchema.__doc__
+ self
\ No newline at end of file
+ source /opt/miniconda3/bin/activate
+ conda activate testbed
+ python -m pip install -e '.[dev]'
Obtaining file:///testbed
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: python-dateutil in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==2.20.0) (2.9.0.post0)
Requirement already satisfied: simplejson in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==2.20.0) (3.20.1)
Requirement already satisfied: pytest in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==2.20.0) (8.3.5)
Requirement already satisfied: pytz in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==2.20.0) (2025.2)
Requirement already satisfied: flake8==3.7.4 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==2.20.0) (3.7.4)
Requirement already satisfied: tox in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==2.20.0) (4.25.0)
Requirement already satisfied: entrypoints<0.4.0,>=0.3.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.4->marshmallow==2.20.0) (0.3)
Requirement already satisfied: pyflakes<2.2.0,>=2.1.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.4->marshmallow==2.20.0) (2.1.1)
Requirement already satisfied: pycodestyle<2.6.0,>=2.5.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.4->marshmallow==2.20.0) (2.5.0)
Requirement already satisfied: mccabe<0.7.0,>=0.6.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.4->marshmallow==2.20.0) (0.6.1)
Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==2.20.0) (1.2.2)
Requirement already satisfied: iniconfig in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==2.20.0) (2.1.0)
Requirement already satisfied: packaging in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==2.20.0) (25.0)
Requirement already satisfied: pluggy<2,>=1.5 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==2.20.0) (1.5.0)
Requirement already satisfied: tomli>=1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==2.20.0) (2.2.1)
Requirement already satisfied: six>=1.5 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from python-dateutil->marshmallow==2.20.0) (1.17.0)
Requirement already satisfied: cachetools>=5.5.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (5.5.2)
Requirement already satisfied: chardet>=5.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (5.2.0)
Requirement already satisfied: colorama>=0.4.6 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (0.4.6)
Requirement already satisfied: filelock>=3.16.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (3.18.0)
Requirement already satisfied: platformdirs>=4.3.6 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (4.3.7)
Requirement already satisfied: pyproject-api>=1.8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (1.9.0)
Requirement already satisfied: typing-extensions>=4.12.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (4.15.0)
Requirement already satisfied: virtualenv>=20.29.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==2.20.0) (20.31.1)
Requirement already satisfied: distlib<1,>=0.3.7 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=20.29.1->tox->marshmallow==2.20.0) (0.3.9)
Installing collected packages: marshmallow
Attempting uninstall: marshmallow
Found existing installation: marshmallow 2.20.0
Uninstalling marshmallow-2.20.0:
Successfully uninstalled marshmallow-2.20.0
DEPRECATION: Legacy editable install of marshmallow[dev]==2.20.0 from file:///testbed (setup.py develop) is deprecated. pip 25.3 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11457
Running setup.py develop for marshmallow
Successfully installed marshmallow
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
+ git checkout 2be2d83a1a9a6d3d9b85804f3ab545cecc409bb0 tests/test_marshalling.py
Updated 0 paths from a3815758
+ git apply -v -
Checking patch tests/test_marshalling.py...
Applied patch tests/test_marshalling.py cleanly.
+ : '>>>>> Start Test Output'
+ pytest -rA tests/test_marshalling.py
ERROR: while parsing the following warning configuration:
ignore::marshmallow.warnings.ChangedInMarshmallow3Warning
This error occurred:
Traceback (most recent call last):
File "/opt/miniconda3/envs/testbed/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1918, in parse_warning_filter
category: type[Warning] = _resolve_warning_category(category_)
File "/opt/miniconda3/envs/testbed/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1956, in _resolve_warning_category
m = __import__(module, None, None, [klass])
File "/testbed/src/marshmallow/__init__.py", line 4, in <module>
from marshmallow.schema import (
ImportError: cannot import name 'Schema' from 'marshmallow.schema' (/testbed/src/marshmallow/schema.py)
+ : '>>>>> End Test Output'
+ git checkout 2be2d83a1a9a6d3d9b85804f3ab545cecc409bb0 tests/test_marshalling.py
Updated 1 path from a3815758
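The pytest run above fails before a single test is collected: the submitted patch reduced the module body to a bare `self`, so `from marshmallow.schema import Schema` can no longer resolve the name. A minimal sketch of this failure class, using a hypothetical module name rather than marshmallow itself:

```python
import sys
import types

# A module that, like the patched schema.py, no longer defines `Schema`.
broken = types.ModuleType("broken_schema")  # hypothetical module name
sys.modules["broken_schema"] = broken

message = ""
try:
    # Equivalent to `from broken_schema import Schema`:
    # the module imports fine, but the attribute lookup fails.
    from broken_schema import Schema
except ImportError as exc:
    message = str(exc)

assert "cannot import name 'Schema'" in message
```

This is why the harness reports 0% fail-to-pass and 0% pass-to-pass: an import-time error fails every test in the file, including the previously passing ones.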
Reference output
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -877,7 +877,7 @@ def _invoke_field_validators(self, unmarshal, data, many):
for idx, item in enumerate(data):
try:
value = item[field_obj.attribute or field_name]
- except KeyError:
+ except (KeyError, TypeError):
pass
else:
validated_value = unmarshal.call_and_store(
@@ -892,7 +892,7 @@ def _invoke_field_validators(self, unmarshal, data, many):
else:
try:
value = data[field_obj.attribute or field_name]
- except KeyError:
+ except (KeyError, TypeError):
pass
else:
validated_value = unmarshal.call_and_store(
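The reference patch is small but pointed: when invalid input reaches `_invoke_field_validators`, `data` may not be a mapping at all (for example `None` or a scalar carried over from a validation error), so subscripting it raises `TypeError` rather than `KeyError`. A minimal sketch, not marshmallow itself, of why both exceptions need catching:

```python
def get_field_value(data, field_name):
    """Return the value for field_name, or None if it cannot be looked up."""
    try:
        return data[field_name]
    except (KeyError, TypeError):
        # KeyError: data is a mapping but the key is absent.
        # TypeError: data is not subscriptable by a string key (e.g. None
        # or an int), which happens when invalid input reaches validators.
        return None

assert get_field_value({"name": "Ada"}, "name") == "Ada"
assert get_field_value({}, "name") is None    # KeyError path
assert get_field_value(None, "name") is None  # TypeError path
```

With only `except KeyError`, the `None` case would escape as an unhandled `TypeError` instead of being reported as a validation error, which is the behavior the fail-to-pass test exercises.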
marshmallow-code__marshmallow-1359
marshmallow-code/marshmallow
Score
0%
Outcome
Did not pass
Task cost
$0.36
Duration
172 s
Summary
Did not pass
Not resolved by official SWE-bench grading. Fail-to-pass: 0%. Pass-to-pass: 0%.
Run metadata
Benchmark
swe_bench/lite/dev
Model
claude-sonnet-4-5-20250929
Started
Mar 31, 2026, 2:29 AM UTC
Completed
Mar 31, 2026, 2:32 AM UTC
Sandbox
32d27182-ed3e-4c37-8084-d3b787d1e7e1
Tokens
In 32,694 / out 8,192
F2P / P2P
0% / 0%
Passed benchmark
No
Completed
/lib/python3.9/site-packages (from virtualenv>=15.2->pre-commit~=1.17->marshmallow==3.0.0) (0.3.9)
Requirement already satisfied: filelock<4,>=3.12.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=15.2->pre-commit~=1.17->marshmallow==3.0.0) (3.18.0)
Requirement already satisfied: platformdirs<5,>=3.9.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=15.2->pre-commit~=1.17->marshmallow==3.0.0) (4.3.7)
Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (1.2.2)
Requirement already satisfied: iniconfig in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (2.1.0)
Requirement already satisfied: packaging in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (25.0)
Requirement already satisfied: pluggy<2,>=1.5 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (1.5.0)
Requirement already satisfied: tomli>=1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (2.2.1)
Requirement already satisfied: cachetools>=5.5.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (5.5.2)
Requirement already satisfied: chardet>=5.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (5.2.0)
Requirement already satisfied: colorama>=0.4.6 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (0.4.6)
Requirement already satisfied: pyproject-api>=1.8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (1.9.0)
Requirement already satisfied: typing-extensions>=4.12.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (4.15.0)
Building wheels for collected packages: marshmallow
Building editable for marshmallow (pyproject.toml): started
Building editable for marshmallow (pyproject.toml): finished with status 'done'
Created wheel for marshmallow: filename=marshmallow-3.0.0-0.editable-py2.py3-none-any.whl size=4552 sha256=8eddfa9b407b64bf895cfaf6154f12caaf373b314c5b198e9399f03138ee3006
Stored in directory: /tmp/pip-ephem-wheel-cache-g0rhbmkj/wheels/7d/66/67/70d1ee2124ccf21d601c352e25cdca10f611f7c8b3f9ffb9e4
Successfully built marshmallow
Installing collected packages: marshmallow
Attempting uninstall: marshmallow
Found existing installation: marshmallow 3.0.0
Uninstalling marshmallow-3.0.0:
Successfully uninstalled marshmallow-3.0.0
Successfully installed marshmallow-3.0.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
+ git checkout b40a0f4e33823e6d0f341f7e8684e359a99060d1 tests/test_fields.py
Updated 0 paths from 56ab4168
+ git apply -v -
Checking patch tests/test_fields.py...
Applied patch tests/test_fields.py cleanly.
+ : '>>>>> Start Test Output'
+ pytest -rA tests/test_fields.py
ImportError while loading conftest '/testbed/tests/conftest.py'.
tests/conftest.py:4: in <module>
from tests.base import User, UserSchema, Blog
tests/base.py:9: in <module>
from marshmallow import Schema, fields, post_load, validate, missing
src/marshmallow/__init__.py:1: in <module>
from marshmallow.schema import Schema, SchemaOpts
E File "/testbed/src/marshmallow/schema.py", line 1
E or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
E ^
E SyntaxError: invalid syntax
+ : '>>>>> End Test Output'
+ git checkout b40a0f4e33823e6d0f341f7e8684e359a99060d1 tests/test_fields.py
Updated 1 path from 56ab4168
Task input
3.0: DateTime fields cannot be used as inner field for List or Tuple fields
Between releases 3.0.0rc8 and 3.0.0rc9, `DateTime` fields have started throwing an error when being instantiated as inner fields of container fields like `List` or `Tuple`. The snippet below works in <=3.0.0rc8 and throws the error below in >=3.0.0rc9 (and, worryingly, 3.0.0):
```python
from marshmallow import fields, Schema
class MySchema(Schema):
times = fields.List(fields.DateTime())
s = MySchema()
```
Traceback:
```
Traceback (most recent call last):
File "test-mm.py", line 8, in <module>
s = MySchema()
File "/Users/victor/.pyenv/versions/marshmallow/lib/python3.6/site-packages/marshmallow/schema.py", line 383, in __init__
self.fields = self._init_fields()
File "/Users/victor/.pyenv/versions/marshmallow/lib/python3.6/site-packages/marshmallow/schema.py", line 913, in _init_fields
self._bind_field(field_name, field_obj)
File "/Users/victor/.pyenv/versions/marshmallow/lib/python3.6/site-packages/marshmallow/schema.py", line 969, in _bind_field
field_obj._bind_to_schema(field_name, self)
File "/Users/victor/.pyenv/versions/marshmallow/lib/python3.6/site-packages/marshmallow/fields.py", line 636, in _bind_to_schema
self.inner._bind_to_schema(field_name, self)
File "/Users/victor/.pyenv/versions/marshmallow/lib/python3.6/site-packages/marshmallow/fields.py", line 1117, in _bind_to_schema
or getattr(schema.opts, self.SCHEMA_OPTS_VAR_NAME)
AttributeError: 'List' object has no attribute 'opts'
```
It seems like it's treating the parent field as a Schema without checking that it is indeed a schema, so the `schema.opts` statement fails as fields don't have an `opts` attribute.
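One way to address this is to resolve options from the root schema by walking the parent chain, so that an inner field bound to a `List` still reaches the `Schema` and its `opts`. The sketch below models that idea with toy classes; the names echo marshmallow's, but this is an illustrative sketch, not the library's actual implementation:

```python
class Opts:
    # Illustrative option, standing in for schema-level settings.
    datetimeformat = "iso"


class Schema:
    opts = Opts()
    parent = None


class Field:
    def __init__(self):
        self.parent = None

    @property
    def root(self):
        # Walk up through container fields until we reach the Schema.
        node = self
        while node.parent is not None:
            node = node.parent
        return node


class List(Field):
    def __init__(self, inner):
        super().__init__()
        self.inner = inner
        inner.parent = self  # the inner field's parent is the List, not the Schema


schema = Schema()
lst = List(Field())
lst.parent = schema

# The buggy code reached for self.parent.opts; List has no `opts`.
# Resolving via the root walks past the container and succeeds.
assert lst.inner.root is schema
assert lst.inner.root.opts.datetimeformat == "iso"
```

This mirrors the shape of the `self.root.opts` lookup visible in the attempted patch's error output above, though the attempt left the file syntactically invalid.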
Fix tests
tests/test_fields.py::TestParentAndName::test_datetime_list_inner_format
Regression tests
tests/test_fields.py::test_field_aliases[Integer-Integer] tests/test_fields.py::test_field_aliases[String-String] tests/test_fields.py::test_field_aliases[Boolean-Boolean] tests/test_fields.py::test_field_aliases[Url-Url] tests/test_fields.py::TestField::test_repr tests/test_fields.py::TestField::test_error_raised_if_uncallable_validator_passed tests/test_fields.py::TestField::test_error_raised_if_missing_is_set_on_required_field tests/test_fields.py::TestField::test_custom_field_receives_attr_and_obj tests/test_fields.py::TestField::test_custom_field_receives_data_key_if_set tests/test_fields.py::TestField::test_custom_field_follows_data_key_if_set tests/test_fields.py::TestParentAndName::test_simple_field_parent_and_name tests/test_fields.py::TestParentAndName::test_unbound_field_root_returns_none tests/test_fields.py::TestParentAndName::test_list_field_inner_parent_and_name tests/test_fields.py::TestParentAndName::test_tuple_field_inner_parent_and_name tests/test_fields.py::TestParentAndName::test_mapping_field_inner_parent_and_name tests/test_fields.py::TestParentAndName::test_simple_field_root tests/test_fields.py::TestParentAndName::test_list_field_inner_root tests/test_fields.py::TestParentAndName::test_tuple_field_inner_root tests/test_fields.py::TestParentAndName::test_list_root_inheritance tests/test_fields.py::TestParentAndName::test_dict_root_inheritance tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[String] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Integer] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Boolean] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Float] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Number] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[DateTime] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Time] 
tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Date] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[TimeDelta] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Dict] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Url] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Email] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[UUID] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Decimal] tests/test_fields.py::TestErrorMessages::test_default_error_messages_get_merged_with_parent_error_messages_cstm_msg tests/test_fields.py::TestErrorMessages::test_default_error_messages_get_merged_with_parent_error_messages tests/test_fields.py::TestErrorMessages::test_make_error[required-Missing tests/test_fields.py::TestErrorMessages::test_make_error[null-Field tests/test_fields.py::TestErrorMessages::test_make_error[custom-Custom tests/test_fields.py::TestErrorMessages::test_make_error[validator_failed-Invalid tests/test_fields.py::TestErrorMessages::test_fail[required-Missing tests/test_fields.py::TestErrorMessages::test_fail[null-Field tests/test_fields.py::TestErrorMessages::test_fail[custom-Custom tests/test_fields.py::TestErrorMessages::test_fail[validator_failed-Invalid tests/test_fields.py::TestErrorMessages::test_make_error_key_doesnt_exist tests/test_fields.py::TestNestedField::test_nested_only_and_exclude_as_string[only] tests/test_fields.py::TestNestedField::test_nested_only_and_exclude_as_string[exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[None-exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[None-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[None-raise] tests/test_fields.py::TestNestedField::test_nested_unknown_override[exclude-exclude] 
tests/test_fields.py::TestNestedField::test_nested_unknown_override[exclude-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[exclude-raise] tests/test_fields.py::TestNestedField::test_nested_unknown_override[include-exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[include-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[include-raise] tests/test_fields.py::TestNestedField::test_nested_unknown_override[raise-exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[raise-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[raise-raise] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[only] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[exclude] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[dump_only] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[load_only] tests/test_fields.py::TestListNested::test_list_nested_only_and_exclude_merged_with_nested[only-expected0] tests/test_fields.py::TestListNested::test_list_nested_only_and_exclude_merged_with_nested[exclude-expected1] tests/test_fields.py::TestListNested::test_list_nested_partial_propagated_to_nested tests/test_fields.py::TestTupleNested::test_tuple_nested_only_exclude_dump_only_load_only_propagated_to_nested[dump_only] tests/test_fields.py::TestTupleNested::test_tuple_nested_only_exclude_dump_only_load_only_propagated_to_nested[load_only] tests/test_fields.py::TestTupleNested::test_tuple_nested_partial_propagated_to_nested tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[only] 
tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[exclude] tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[dump_only] tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[load_only] tests/test_fields.py::TestDictNested::test_dict_nested_only_and_exclude_merged_with_nested[only-expected0] tests/test_fields.py::TestDictNested::test_dict_nested_only_and_exclude_merged_with_nested[exclude-expected1] tests/test_fields.py::TestDictNested::test_dict_nested_partial_propagated_to_nested
Execution
Scorer detail
Not resolved by official SWE-bench grading. Fail-to-pass: 0%. Pass-to-pass: 0%.
Patch text
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
index 520ffb44..a683c166 100644
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -1,1140 +1 @@
-"""The :class:`Schema` class, including its metaclass and options (class Meta)."""
-from collections import defaultdict, OrderedDict
-from collections.abc import Mapping
-from functools import lru_cache
-import datetime as dt
-import uuid
-import decimal
-import copy
-import inspect
-import json
-import typing
-import warnings
-
-from marshmallow import base, fields as ma_fields, class_registry
-from marshmallow.error_store import ErrorStore
-from marshmallow.exceptions import ValidationError, StringNotCollectionError
-from marshmallow.orderedset import OrderedSet
-from marshmallow.decorators import (
- POST_DUMP,
- POST_LOAD,
- PRE_DUMP,
- PRE_LOAD,
- VALIDATES,
- VALIDATES_SCHEMA,
-)
-from marshmallow.utils import (
- RAISE,
- EXCLUDE,
- INCLUDE,
- missing,
- set_value,
- get_value,
- is_collection,
- is_instance_or_subclass,
- is_iterable_but_not_string,
-)
-
-
-def _get_fields(attrs, field_class, pop=False, ordered=False):
- """Get fields from a class. If ordered=True, fields will sorted by creation index.
-
- :param attrs: Mapping of class attributes
- :param type field_class: Base field class
- :param bool pop: Remove matching fields
- """
- fields = [
- (field_name, field_value)
- for field_name, field_value in attrs.items()
- if is_instance_or_subclass(field_value, field_class)
- ]
- if pop:
- for field_name, _ in fields:
- del attrs[field_name]
- if ordered:
- fields.sort(key=lambda pair: pair[1]._creation_index)
- return fields
-
-
-# This function allows Schemas to inherit from non-Schema classes and ensures
-# inheritance according to the MRO
-def _get_fields_by_mro(klass, field_class, ordered=False):
- """Collect fields from a class, following its method resolution order. The
- class itself is excluded from the search; only its parents are checked. Get
- fields from ``_declared_fields`` if available, else use ``__dict__``.
-
- :param type klass: Class whose fields to retrieve
- :param type field_class: Base field class
- """
- mro = inspect.getmro(klass)
- # Loop over mro in reverse to maintain correct order of fields
- return sum(
- (
- _get_fields(
- getattr(base, "_declared_fields", base.__dict__),
- field_class,
- ordered=ordered,
- )
- for base in mro[:0:-1]
- ),
- [],
- )
-
-
-class SchemaMeta(type):
- """Metaclass for the Schema class. Binds the declared fields to
- a ``_declared_fields`` attribute, which is a dictionary mapping attribute
- names to field objects. Also sets the ``opts`` class attribute, which is
- the Schema class's ``class Meta`` options.
- """
-
- def __new__(mcs, name, bases, attrs):
- meta = attrs.get("Meta")
- ordered = getattr(meta, "ordered", False)
- if not ordered:
- # Inherit 'ordered' option
- # Warning: We loop through bases instead of MRO because we don't
- # yet have access to the class object
- # (i.e. can't call super before we have fields)
- for base_ in bases:
- if hasattr(base_, "Meta") and hasattr(base_.Meta, "ordered"):
- ordered = base_.Meta.ordered
- break
- else:
- ordered = False
- cls_fields = _get_fields(attrs, base.FieldABC, pop=True, ordered=ordered)
- klass = super().__new__(mcs, name, bases, attrs)
- inherited_fields = _get_fields_by_mro(klass, base.FieldABC, ordered=ordered)
-
- meta = klass.Meta
- # Set klass.opts in __new__ rather than __init__ so that it is accessible in
- # get_declared_fields
- klass.opts = klass.OPTIONS_CLASS(meta, ordered=ordered)
- # Add fields specified in the `include` class Meta option
- cls_fields += list(klass.opts.include.items())
-
- dict_cls = OrderedDict if ordered else dict
- # Assign _declared_fields on class
- klass._declared_fields = mcs.get_declared_fields(
- klass=klass,
- cls_fields=cls_fields,
- inherited_fields=inherited_fields,
- dict_cls=dict_cls,
- )
- return klass
-
- @classmethod
- def get_declared_fields(mcs, klass, cls_fields, inherited_fields, dict_cls):
- """Returns a dictionary of field_name => `Field` pairs declard on the class.
- This is exposed mainly so that plugins can add additional fields, e.g. fields
- computed from class Meta options.
-
- :param type klass: The class object.
- :param list cls_fields: The fields declared on the class, including those added
- by the ``include`` class Meta option.
- :param list inherited_fields: Inherited fields.
- :param type dict_class: Either `dict` or `OrderedDict`, depending on the whether
- the user specified `ordered=True`.
- """
- return dict_cls(inherited_fields + cls_fields)
-
- def __init__(cls, name, bases, attrs):
- super().__init__(cls, bases, attrs)
- if name and cls.opts.register:
- class_registry.register(name, cls)
- cls._hooks = cls.resolve_hooks()
-
- def resolve_hooks(cls):
- """Add in the decorated processors
-
- By doing this after constructing the class, we let standard inheritance
- do all the hard work.
- """
- mro = inspect.getmro(cls)
-
- hooks = defaultdict(list)
-
- for attr_name in dir(cls):
- # Need to look up the actual descriptor, not whatever might be
- # bound to the class. This needs to come from the __dict__ of the
- # declaring class.
- for parent in mro:
- try:
- attr = parent.__dict__[attr_name]
- except KeyError:
- continue
- else:
- break
- else:
- # In case we didn't find the attribute and didn't break above.
- # We should never hit this - it's just here for completeness
- # to exclude the possibility of attr being undefined.
- continue
-
- try:
- hook_config = attr.__marshmallow_hook__
- except AttributeError:
- pass
- else:
- for key in hook_config.keys():
- # Use name here so we can get the bound method later, in
- # case the processor was a descriptor or something.
- hooks[key].append(attr_name)
-
- return hooks
-
-
-class SchemaOpts:
- """class Meta options for the :class:`Schema`. Defines defaults."""
-
- def __init__(self, meta, ordered=False):
- self.fields = getattr(meta, "fields", ())
- if not isinstance(self.fields, (list, tuple)):
- raise ValueError("`fields` option must be a list or tuple.")
- self.additional = getattr(meta, "additional", ())
- if not isinstance(self.additional, (list, tuple)):
- raise ValueError("`additional` option must be a list or tuple.")
- if self.fields and self.additional:
- raise ValueError(
- "Cannot set both `fields` and `additional` options"
- " for the same Schema."
- )
- self.exclude = getattr(meta, "exclude", ())
- if not isinstance(self.exclude, (list, tuple)):
- raise ValueError("`exclude` must be a list or tuple.")
- self.dateformat = getattr(meta, "dateformat", None)
- self.datetimeformat = getattr(meta, "datetimeformat", None)
- if hasattr(meta, "json_module"):
- warnings.warn(
- "The json_module class Meta option is deprecated. Use render_module instead.",
- DeprecationWarning,
- )
- render_module = getattr(meta, "json_module", json)
- else:
- render_module = json
- self.render_module = getattr(meta, "render_module", render_module)
- self.ordered = getattr(meta, "ordered", ordered)
- self.index_errors = getattr(meta, "index_errors", True)
- self.include = getattr(meta, "include", {})
- self.load_only = getattr(meta, "load_only", ())
- self.dump_only = getattr(meta, "dump_only", ())
- self.unknown = getattr(meta, "unknown", RAISE)
- self.register = getattr(meta, "register", True)
-
-
-class BaseSchema(base.SchemaABC):
- """Base schema class with which to define custom schemas.
-
- Example usage:
-
- .. code-block:: python
-
- import datetime as dt
- from marshmallow import Schema, fields
-
- class Album:
- def __init__(self, title, release_date):
- self.title = title
- self.release_date = release_date
-
- class AlbumSchema(Schema):
- title = fields.Str()
- release_date = fields.Date()
-
- # Or, equivalently
- class AlbumSchema2(Schema):
- class Meta:
- fields = ("title", "release_date")
-
- album = Album("Beggars Banquet", dt.date(1968, 12, 6))
- schema = AlbumSchema()
- data = schema.dump(album)
- data # {'release_date': '1968-12-06', 'title': 'Beggars Banquet'}
-
- :param tuple|list only: Whitelist of the declared fields to select when
- instantiating the Schema. If None, all fields are used. Nested fields
- can be represented with dot delimiters.
- :param tuple|list exclude: Blacklist of the declared fields to exclude
- when instantiating the Schema. If a field appears in both `only` and
- `exclude`, it is not used. Nested fields can be represented with dot
- delimiters.
- :param bool many: Should be set to `True` if ``obj`` is a collection
- so that the object will be serialized to a list.
- :param dict context: Optional context passed to :class:`fields.Method` and
- :class:`fields.Function` fields.
- :param tuple|list load_only: Fields to skip during serialization (write-only fields)
- :param tuple|list dump_only: Fields to skip during deserialization (read-only fields)
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
-
- .. versionchanged:: 3.0.0
- `prefix` parameter removed.
-
- .. versionchanged:: 2.0.0
- `__validators__`, `__preprocessors__`, and `__data_handlers__` are removed in favor of
- `marshmallow.decorators.validates_schema`,
- `marshmallow.decorators.pre_load` and `marshmallow.decorators.post_dump`.
- `__accessor__` and `__error_handler__` are deprecated. Implement the
- `handle_error` and `get_attribute` methods instead.
- """
-
- TYPE_MAPPING = {
- str: ma_fields.String,
- bytes: ma_fields.String,
- dt.datetime: ma_fields.DateTime,
- float: ma_fields.Float,
- bool: ma_fields.Boolean,
- tuple: ma_fields.Raw,
- list: ma_fields.Raw,
- set: ma_fields.Raw,
- int: ma_fields.Integer,
- uuid.UUID: ma_fields.UUID,
- dt.time: ma_fields.Time,
- dt.date: ma_fields.Date,
- dt.timedelta: ma_fields.TimeDelta,
- decimal.Decimal: ma_fields.Decimal,
- }
- #: Overrides for default schema-level error messages
- error_messages = {}
-
- _default_error_messages = {
- "type": "Invalid input type.",
- "unknown": "Unknown field.",
- }
-
- OPTIONS_CLASS = SchemaOpts
-
- class Meta:
- """Options object for a Schema.
-
- Example usage: ::
-
- class Meta:
- fields = ("id", "email", "date_created")
- exclude = ("password", "secret_attribute")
-
- Available options:
-
- - ``fields``: Tuple or list of fields to include in the serialized result.
- - ``additional``: Tuple or list of fields to include *in addition* to the
- explicitly declared fields. ``additional`` and ``fields`` are
- mutually-exclusive options.
- - ``include``: Dictionary of additional fields to include in the schema. It is
- usually better to define fields as class variables, but you may need to
- use this option, e.g., if your fields are Python keywords. May be an
- `OrderedDict`.
- - ``exclude``: Tuple or list of fields to exclude in the serialized result.
- Nested fields can be represented with dot delimiters.
- - ``dateformat``: Default format for `Date <fields.Date>` fields.
- - ``datetimeformat``: Default format for `DateTime <fields.DateTime>` fields.
- - ``render_module``: Module to use for `loads <Schema.loads>` and `dumps <Schema.dumps>`.
- Defaults to `json` from the standard library.
- - ``ordered``: If `True`, order serialization output according to the
- order in which fields were declared. Output of `Schema.dump` will be a
- `collections.OrderedDict`.
- - ``index_errors``: If `True`, errors dictionaries will include the index
- of invalid items in a collection.
- - ``load_only``: Tuple or list of fields to exclude from serialized results.
- - ``dump_only``: Tuple or list of fields to exclude from deserialization
- - ``unknown``: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- - ``register``: Whether to register the `Schema` with marshmallow's internal
- class registry. Must be `True` if you intend to refer to this `Schema`
- by class name in `Nested` fields. Only set this to `False` when memory
- usage is critical. Defaults to `True`.
- """
-
- pass
-
- def __init__(
- self,
- *,
- only=None,
- exclude=(),
- many=False,
- context=None,
- load_only=(),
- dump_only=(),
- partial=False,
- unknown=None
- ):
- # Raise error if only or exclude is passed as string, not list of strings
- if only is not None and not is_collection(only):
- raise StringNotCollectionError('"only" should be a list of strings')
- if exclude is not None and not is_collection(exclude):
- raise StringNotCollectionError('"exclude" should be a list of strings')
- # copy declared fields from metaclass
- self.declared_fields = copy.deepcopy(self._declared_fields)
- self.many = many
- self.only = only
- self.exclude = set(self.opts.exclude) | set(exclude)
- self.ordered = self.opts.ordered
- self.load_only = set(load_only) or set(self.opts.load_only)
- self.dump_only = set(dump_only) or set(self.opts.dump_only)
- self.partial = partial
- self.unknown = unknown or self.opts.unknown
- self.context = context or {}
- self._normalize_nested_options()
- #: Dictionary mapping field_names -> :class:`Field` objects
- self.fields = self._init_fields()
- self.dump_fields, self.load_fields = self.dict_class(), self.dict_class()
- for field_name, field_obj in self.fields.items():
- if field_obj.load_only:
- self.load_fields[field_name] = field_obj
- elif field_obj.dump_only:
- self.dump_fields[field_name] = field_obj
- else:
- self.load_fields[field_name] = field_obj
- self.dump_fields[field_name] = field_obj
- messages = {}
- messages.update(self._default_error_messages)
- for cls in reversed(self.__class__.__mro__):
- messages.update(getattr(cls, "error_messages", {}))
- messages.update(self.error_messages or {})
- self.error_messages = messages
-
- def __repr__(self):
- return "<{ClassName}(many={self.many})>".format(
- ClassName=self.__class__.__name__, self=self
- )
-
- @property
- def dict_class(self):
- return OrderedDict if self.ordered else dict
-
- @property
- def set_class(self):
- return OrderedSet if self.ordered else set
-
- @classmethod
- def from_dict(
- cls, fields: typing.Dict[str, ma_fields.Field], *, name: str = "GeneratedSchema"
- ) -> typing.Type["Schema"]:
- """Generate a `Schema` class given a dictionary of fields.
-
- .. code-block:: python
-
- from marshmallow import Schema, fields
-
- PersonSchema = Schema.from_dict({"name": fields.Str()})
- print(PersonSchema().load({"name": "David"})) # => {'name': 'David'}
-
- Generated schemas are not added to the class registry and therefore cannot
- be referred to by name in `Nested` fields.
-
- :param dict fields: Dictionary mapping field names to field instances.
- :param str name: Optional name for the class, which will appear in
- the ``repr`` for the class.
-
- .. versionadded:: 3.0.0
- """
- attrs = fields.copy()
- attrs["Meta"] = type(
- "GeneratedMeta", (getattr(cls, "Meta", object),), {"register": False}
- )
- schema_cls = type(name, (cls,), attrs)
- return schema_cls
-
- ##### Override-able methods #####
-
- def handle_error(self, error, data, *, many, **kwargs):
- """Custom error handler function for the schema.
-
- :param ValidationError error: The `ValidationError` raised during (de)serialization.
- :param data: The original input data.
- :param bool many: Value of ``many`` on dump or load.
- :param bool partial: Value of ``partial`` on load.
-
- .. versionadded:: 2.0.0
-
- .. versionchanged:: 3.0.0rc9
- Receives `many` and `partial` (on deserialization) as keyword arguments.
- """
- pass
-
- def get_attribute(self, obj, attr, default):
- """Defines how to pull values from an object to serialize.
-
- .. versionadded:: 2.0.0
-
- .. versionchanged:: 3.0.0a1
- Changed position of ``obj`` and ``attr``.
- """
- return get_value(obj, attr, default)
-
- ##### Serialization/Deserialization API #####
-
- @staticmethod
- def _call_and_store(getter_func, data, *, field_name, error_store, index=None):
- """Call ``getter_func`` with ``data`` as its argument, and store any `ValidationErrors`.
-
- :param callable getter_func: Function for getting the serialized/deserialized
- value from ``data``.
- :param data: The data passed to ``getter_func``.
- :param str field_name: Field name.
- :param int index: Index of the item being validated, if validating a collection,
- otherwise `None`.
- """
- try:
- value = getter_func(data)
- except ValidationError as error:
- error_store.store_error(error.messages, field_name, index=index)
- # When a Nested field fails validation, the marshalled data is stored
- # on the ValidationError's valid_data attribute
- return error.valid_data or missing
- return value
-
- def _serialize(self, obj, *, many=False):
- """Serialize ``obj``.
-
- :param obj: The object(s) to serialize.
- :param bool many: `True` if ``data`` should be serialized as a collection.
- :return: A dictionary of the serialized data
-
- .. versionchanged:: 1.0.0
- Renamed from ``marshal``.
- """
- if many and obj is not None:
- return [self._serialize(d, many=False) for d in obj]
- ret = self.dict_class()
- for attr_name, field_obj in self.dump_fields.items():
- value = field_obj.serialize(attr_name, obj, accessor=self.get_attribute)
- if value is missing:
- continue
- key = field_obj.data_key or attr_name
- ret[key] = value
- return ret
-
- def dump(self, obj, *, many=None):
- """Serialize an object to native Python data types according to this
- Schema's fields.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :return: A dict of serialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the serialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if ``obj`` is invalid.
- .. versionchanged:: 3.0.0rc9
- Validation no longer occurs upon serialization.
- """
- many = self.many if many is None else bool(many)
- if many and is_iterable_but_not_string(obj):
- obj = list(obj)
-
- if self._has_processors(PRE_DUMP):
- processed_obj = self._invoke_dump_processors(
- PRE_DUMP, obj, many=many, original_data=obj
- )
- else:
- processed_obj = obj
-
- result = self._serialize(processed_obj, many=many)
-
- if self._has_processors(POST_DUMP):
- result = self._invoke_dump_processors(
- POST_DUMP, result, many=many, original_data=obj
- )
-
- return result
-
- def dumps(self, obj, *args, many=None, **kwargs):
- """Same as :meth:`dump`, except return a JSON-encoded string.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :return: A ``json`` string
- :rtype: str
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the serialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if ``obj`` is invalid.
- """
- serialized = self.dump(obj, many=many)
- return self.opts.render_module.dumps(serialized, *args, **kwargs)
-
- def _deserialize(
- self, data, *, error_store, many=False, partial=False, unknown=RAISE, index=None
- ):
- """Deserialize ``data``.
-
- :param dict data: The data to deserialize.
- :param ErrorStore error_store: Structure to store errors.
- :param bool many: `True` if ``data`` should be deserialized as a collection.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- :param int index: Index of the item being serialized (for storing errors) if
- serializing a collection, otherwise `None`.
- :return: A dictionary of the deserialized data.
- """
- index_errors = self.opts.index_errors
- index = index if index_errors else None
- if many:
- if not is_collection(data):
- error_store.store_error([self.error_messages["type"]], index=index)
- ret = []
- else:
- ret = [
- self._deserialize(
- d,
- error_store=error_store,
- many=False,
- partial=partial,
- unknown=unknown,
- index=idx,
- )
- for idx, d in enumerate(data)
- ]
- return ret
- ret = self.dict_class()
- # Check data is a dict
- if not isinstance(data, Mapping):
- error_store.store_error([self.error_messages["type"]], index=index)
- else:
- partial_is_collection = is_collection(partial)
- for attr_name, field_obj in self.load_fields.items():
- field_name = field_obj.data_key or attr_name
- raw_value = data.get(field_name, missing)
- if raw_value is missing:
- # Ignore missing field if we're allowed to.
- if partial is True or (
- partial_is_collection and attr_name in partial
- ):
- continue
- d_kwargs = {}
- # Allow partial loading of nested schemas.
- if partial_is_collection:
- prefix = field_name + "."
- len_prefix = len(prefix)
- sub_partial = [
- f[len_prefix:] for f in partial if f.startswith(prefix)
- ]
- d_kwargs["partial"] = sub_partial
- else:
- d_kwargs["partial"] = partial
- getter = lambda val: field_obj.deserialize(
- val, field_name, data, **d_kwargs
- )
- value = self._call_and_store(
- getter_func=getter,
- data=raw_value,
- field_name=field_name,
- error_store=error_store,
- index=index,
- )
- if value is not missing:
- key = field_obj.attribute or attr_name
- set_value(ret, key, value)
- if unknown != EXCLUDE:
- fields = {
- field_obj.data_key or field_name
- for field_name, field_obj in self.load_fields.items()
- }
- for key in set(data) - fields:
- value = data[key]
- if unknown == INCLUDE:
- set_value(ret, key, value)
- elif unknown == RAISE:
- error_store.store_error(
- [self.error_messages["unknown"]],
- key,
- (index if index_errors else None),
- )
- return ret
-
- def load(self, data, *, many=None, partial=None, unknown=None):
- """Deserialize a data structure to an object defined by this Schema's fields.
-
- :param dict data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :return: A dict of deserialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the deserialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if invalid data are passed.
- """
- return self._do_load(
- data, many=many, partial=partial, unknown=unknown, postprocess=True
- )
-
- def loads(self, json_data, *, many=None, partial=None, unknown=None, **kwargs):
- """Same as :meth:`load`, except it takes a JSON string as input.
-
- :param str json_data: A JSON string of the data to deserialize.
- :param bool many: Whether to deserialize `obj` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :return: A dict of deserialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the deserialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if invalid data are passed.
- """
- data = self.opts.render_module.loads(json_data, **kwargs)
- return self.load(data, many=many, partial=partial, unknown=unknown)
-
- def _run_validator(
- self,
- validator_func,
- output,
- *,
- original_data,
- error_store,
- many,
- partial,
- pass_original,
- index=None
- ):
- try:
- if pass_original: # Pass original, raw data (before unmarshalling)
- validator_func(output, original_data, partial=partial, many=many)
- else:
- validator_func(output, partial=partial, many=many)
- except ValidationError as err:
- error_store.store_error(err.messages, err.field_name, index=index)
-
- def validate(self, data, *, many=None, partial=None):
- """Validate `data` against the schema, returning a dictionary of
- validation errors.
-
- :param dict data: The data to validate.
- :param bool many: Whether to validate `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :return: A dictionary of validation errors.
- :rtype: dict
-
- .. versionadded:: 1.1.0
- """
- try:
- self._do_load(data, many=many, partial=partial, postprocess=False)
- except ValidationError as exc:
- return exc.messages
- return {}
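
The `validate` method above follows a try/except pattern: attempt a load with postprocessing disabled, and return the collected error messages instead of raising. A minimal stdlib-only sketch of that pattern (hypothetical `load_strict` helper and error class — not marshmallow's actual API):

```python
# Sketch of the validate-via-load pattern: errors raised during a strict
# load are caught and returned as a messages dict; success returns {}.
class SchemaValidationError(Exception):
    def __init__(self, messages):
        self.messages = messages

def load_strict(data, required):
    # Raise with per-field messages if any required field is absent.
    missing_fields = [f for f in required if f not in data]
    if missing_fields:
        raise SchemaValidationError(
            {f: ["Missing data for required field."] for f in missing_fields}
        )
    return dict(data)

def validate(data, required):
    try:
        load_strict(data, required)
    except SchemaValidationError as exc:
        return exc.messages
    return {}
```

As in the method above, callers get an empty dict when the data is valid.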
-
- ##### Private Helpers #####
-
- def _do_load(
- self, data, *, many=None, partial=None, unknown=None, postprocess=True
- ):
- """Deserialize `data`, returning the deserialized result.
-
- :param data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to validate required fields. If its
- value is an iterable, only fields listed in that iterable will be
- allowed missing. If `True`, all fields will be allowed missing.
- If `None`, the value for `self.partial` is used.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :param bool postprocess: Whether to run post_load methods.
- :return: A dict of deserialized data
- :rtype: dict
- """
- error_store = ErrorStore()
- errors = {}
- many = self.many if many is None else bool(many)
- unknown = unknown or self.unknown
- if partial is None:
- partial = self.partial
- # Run preprocessors
- if self._has_processors(PRE_LOAD):
- try:
- processed_data = self._invoke_load_processors(
- PRE_LOAD, data, many=many, original_data=data, partial=partial
- )
- except ValidationError as err:
- errors = err.normalized_messages()
- result = None
- else:
- processed_data = data
- if not errors:
- # Deserialize data
- result = self._deserialize(
- processed_data,
- error_store=error_store,
- many=many,
- partial=partial,
- unknown=unknown,
- )
- # Run field-level validation
- self._invoke_field_validators(
- error_store=error_store, data=result, many=many
- )
- # Run schema-level validation
- if self._has_processors(VALIDATES_SCHEMA):
- field_errors = bool(error_store.errors)
- self._invoke_schema_validators(
- error_store=error_store,
- pass_many=True,
- data=result,
- original_data=data,
- many=many,
- partial=partial,
- field_errors=field_errors,
- )
- self._invoke_schema_validators(
- error_store=error_store,
- pass_many=False,
- data=result,
- original_data=data,
- many=many,
- partial=partial,
- field_errors=field_errors,
- )
- errors = error_store.errors
- # Run post processors
- if not errors and postprocess and self._has_processors(POST_LOAD):
- try:
- result = self._invoke_load_processors(
- POST_LOAD,
- result,
- many=many,
- original_data=data,
- partial=partial,
- )
- except ValidationError as err:
- errors = err.normalized_messages()
- if errors:
- exc = ValidationError(errors, data=data, valid_data=result)
- self.handle_error(exc, data, many=many, partial=partial)
- raise exc
-
- return result
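
The `_do_load` pipeline above runs pre_load hooks, then deserialization and error collection, then post_load hooks only if no errors occurred. A minimal stdlib-only sketch of that control flow (hypothetical names, not marshmallow's internals):

```python
from collections import defaultdict

# Sketch of the load pipeline: pre_load hook -> deserialize -> collect
# errors -> post_load hook; errors abort before postprocessing.
def do_load(data, *, pre_load=None, post_load=None, required=()):
    errors = defaultdict(list)
    if pre_load:
        data = pre_load(data)
    result = {}
    for key in required:
        if key not in data:
            errors[key].append("Missing data for required field.")
    for key, value in data.items():
        result[key] = value
    if errors:
        # Mirrors raising ValidationError with the accumulated messages.
        raise ValueError(dict(errors))
    if post_load:
        result = post_load(result)
    return result
```

Note that, as in the real method, post-load processing is skipped entirely once any error has been stored.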
-
- def _normalize_nested_options(self):
- """Apply then flatten nested schema options"""
- if self.only is not None:
- # Apply the only option to nested fields.
- self.__apply_nested_option("only", self.only, "intersection")
- # Remove the child field names from the only option.
- self.only = self.set_class([field.split(".", 1)[0] for field in self.only])
- if self.exclude:
- # Apply the exclude option to nested fields.
- self.__apply_nested_option("exclude", self.exclude, "union")
- # Remove the parent field names from the exclude option.
- self.exclude = self.set_class(
- [field for field in self.exclude if "." not in field]
- )
-
- def __apply_nested_option(self, option_name, field_names, set_operation):
- """Apply nested options to nested fields"""
- # Split nested field names on the first dot.
- nested_fields = [name.split(".", 1) for name in field_names if "." in name]
- # Partition the nested field names by parent field.
- nested_options = defaultdict(list)
- for parent, nested_names in nested_fields:
- nested_options[parent].append(nested_names)
- # Apply the nested field options.
- for key, options in iter(nested_options.items()):
- new_options = self.set_class(options)
- original_options = getattr(self.declared_fields[key], option_name, ())
- if original_options:
- if set_operation == "union":
- new_options |= self.set_class(original_options)
- if set_operation == "intersection":
- new_options &= self.set_class(original_options)
- setattr(self.declared_fields[key], option_name, new_options)
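
The partitioning step in `__apply_nested_option` splits dotted names like `"parent.child"` on the first dot and groups the child names by parent. That grouping can be sketched on its own with the stdlib (illustrative helper, not marshmallow's API):

```python
from collections import defaultdict

# Sketch of the dotted-name partitioning: each "parent.child" entry is
# split on the first dot only, so "a.c.d" yields child "c.d" for "a".
def partition_nested(field_names):
    nested = defaultdict(list)
    for name in field_names:
        if "." in name:
            parent, child = name.split(".", 1)
            nested[parent].append(child)
    return dict(nested)
```

Plain (undotted) names are left out, matching the filter in the method above.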
-
- def _init_fields(self):
- """Update fields based on schema options."""
- if self.opts.fields:
- available_field_names = self.set_class(self.opts.fields)
- else:
- available_field_names = self.set_class(self.declared_fields.keys())
- if self.opts.additional:
- available_field_names |= self.set_class(self.opts.additional)
-
- invalid_fields = self.set_class()
-
- if self.only is not None:
- # Return only fields specified in only option
- field_names = self.set_class(self.only)
-
- invalid_fields |= field_names - available_field_names
- else:
- field_names = available_field_names
-
- # If "exclude" option or param is specified, remove those fields.
- if self.exclude:
- # Note that this isn't available_field_names, since we want to
- # apply "only" for the actual calculation.
- field_names = field_names - self.exclude
- invalid_fields |= self.exclude - available_field_names
-
- if invalid_fields:
- message = "Invalid fields for {}: {}.".format(self, invalid_fields)
- raise ValueError(message)
-
- fields_dict = self.dict_class()
- for field_name in field_names:
- field_obj = self.declared_fields.get(field_name, ma_fields.Inferred())
- self._bind_field(field_name, field_obj)
- fields_dict[field_name] = field_obj
-
- dump_data_keys = [
- obj.data_key or name
- for name, obj in fields_dict.items()
- if not obj.load_only
- ]
- if len(dump_data_keys) != len(set(dump_data_keys)):
- data_keys_duplicates = {
- x for x in dump_data_keys if dump_data_keys.count(x) > 1
- }
- raise ValueError(
- "The data_key argument for one or more fields collides "
- "with another field's name or data_key argument. "
- "Check the following field names and "
- "data_key arguments: {}".format(list(data_keys_duplicates))
- )
-
- load_attributes = [
- obj.attribute or name
- for name, obj in fields_dict.items()
- if not obj.dump_only
- ]
- if len(load_attributes) != len(set(load_attributes)):
- attributes_duplicates = {
- x for x in load_attributes if load_attributes.count(x) > 1
- }
- raise ValueError(
- "The attribute argument for one or more fields collides "
- "with another field's name or attribute argument. "
- "Check the following field names and "
- "attribute arguments: {}".format(list(attributes_duplicates))
- )
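
Both collision checks above (for `data_key` and for `attribute`) reduce to finding keys that more than one field maps to. A compact stdlib-only sketch of that duplicate detection (hypothetical helper name):

```python
# Sketch of the collision check: return every key that appears more than
# once, in a single pass rather than repeated list.count() calls.
def find_duplicates(keys):
    seen, dupes = set(), set()
    for key in keys:
        if key in seen:
            dupes.add(key)
        seen.add(key)
    return dupes
```

The set-based pass is O(n), whereas the `count()`-based comprehension in the code above is O(n²); both yield the same duplicate set.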
-
- return fields_dict
-
- def on_bind_field(self, field_name, field_obj):
- """Hook to modify a field when it is bound to the `Schema`.
-
- No-op by default.
- """
- return None
-
- def _bind_field(self, field_name, field_obj):
- """Bind field to the schema, setting any necessary attributes on the
- field (e.g. parent and name).
-
- Also set field load_only and dump_only values if field_name was
- specified in ``class Meta``.
- """
- try:
- if field_name in self.load_only:
- field_obj.load_only = True
- if field_name in self.dump_only:
- field_obj.dump_only = True
- field_obj._bind_to_schema(field_name, self)
- self.on_bind_field(field_name, field_obj)
- except TypeError as error:
- # field declared as a class, not an instance
- if isinstance(field_obj, type) and issubclass(field_obj, base.FieldABC):
- msg = (
- 'Field for "{}" must be declared as a '
- "Field instance, not a class. "
- 'Did you mean "fields.{}()"?'.format(field_name, field_obj.__name__)
- )
- raise TypeError(msg) from error
-
- @lru_cache(maxsize=8)
- def _has_processors(self, tag):
- return self._hooks[(tag, True)] or self._hooks[(tag, False)]
-
- def _invoke_dump_processors(self, tag, data, *, many, original_data=None):
- # The pass_many post-dump processors may do things like add an envelope, so
- # invoke those after invoking the non-pass_many processors which will expect
- # to get a list of items.
- data = self._invoke_processors(
- tag, pass_many=False, data=data, many=many, original_data=original_data
- )
- data = self._invoke_processors(
- tag, pass_many=True, data=data, many=many, original_data=original_data
- )
- return data
-
- def _invoke_load_processors(self, tag, data, *, many, original_data, partial):
- # This has to invert the order of the dump processors, so run the pass_many
- # processors first.
- data = self._invoke_processors(
- tag,
- pass_many=True,
- data=data,
- many=many,
- original_data=original_data,
- partial=partial,
- )
- data = self._invoke_processors(
- tag,
- pass_many=False,
- data=data,
- many=many,
- original_data=original_data,
- partial=partial,
- )
- return data
-
- def _invoke_field_validators(self, *, error_store, data, many):
- for attr_name in self._hooks[VALIDATES]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_hook__[VALIDATES]
- field_name = validator_kwargs["field_name"]
-
- try:
- field_obj = self.fields[field_name]
- except KeyError as error:
- if field_name in self.declared_fields:
- continue
- raise ValueError(
- '"{}" field does not exist.'.format(field_name)
- ) from error
-
- if many:
- for idx, item in enumerate(data):
- try:
- value = item[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = self._call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.data_key or field_name,
- error_store=error_store,
- index=(idx if self.opts.index_errors else None),
- )
- if validated_value is missing:
- data[idx].pop(field_name, None)
- else:
- try:
- value = data[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = self._call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.data_key or field_name,
- error_store=error_store,
- )
- if validated_value is missing:
- data.pop(field_name, None)
-
- def _invoke_schema_validators(
- self,
- *,
- error_store,
- pass_many,
- data,
- original_data,
- many,
- partial,
- field_errors=False
- ):
- for attr_name in self._hooks[(VALIDATES_SCHEMA, pass_many)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_hook__[
- (VALIDATES_SCHEMA, pass_many)
- ]
- if field_errors and validator_kwargs["skip_on_field_errors"]:
- continue
- pass_original = validator_kwargs.get("pass_original", False)
-
- if many and not pass_many:
- for idx, (item, orig) in enumerate(zip(data, original_data)):
- self._run_validator(
- validator,
- item,
- original_data=orig,
- error_store=error_store,
- many=many,
- partial=partial,
- index=idx,
- pass_original=pass_original,
- )
- else:
- self._run_validator(
- validator,
- data,
- original_data=original_data,
- error_store=error_store,
- many=many,
- pass_original=pass_original,
- partial=partial,
- )
-
- def _invoke_processors(
- self, tag, *, pass_many, data, many, original_data=None, **kwargs
- ):
- key = (tag, pass_many)
- for attr_name in self._hooks[key]:
- # This will be a bound method.
- processor = getattr(self, attr_name)
-
- processor_kwargs = processor.__marshmallow_hook__[key]
- pass_original = processor_kwargs.get("pass_original", False)
-
- if pass_many:
- if pass_original:
- data = processor(data, original_data, many=many, **kwargs)
- else:
- data = processor(data, many=many, **kwargs)
- elif many:
- if pass_original:
- data = [
- processor(item, original, many=many, **kwargs)
- for item, original in zip(data, original_data)
- ]
- else:
- data = [processor(item, many=many, **kwargs) for item in data]
- else:
- if pass_original:
- data = processor(data, original_data, many=many, **kwargs)
- else:
- data = processor(data, many=many, **kwargs)
- return data
-
-
-class Schema(BaseSchema, metaclass=SchemaMeta):
- __doc__ = BaseSchema.__doc__
+or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
\ No newline at end of file
Stdout
[anthropic-agent] instance=marshmallow-code__marshmallow-1359
[anthropic-agent] repo=marshmallow-code/marshmallow
[anthropic-agent] sandbox=32d27182-ed3e-4c37-8084-d3b787d1e7e1
[anthropic-agent] model=claude-sonnet-4-5-20250929
[anthropic-agent] context_files=6
[anthropic-agent] full_file_context=yes
[anthropic-agent] edit_attempts=2
Stderr
[anthropic-agent] Attempt 1: File-rewrite plan contained no files.
Agent output
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
index 520ffb44..a683c166 100644
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -1,1140 +1 @@
-"""The :class:`Schema` class, including its metaclass and options (class Meta)."""
-from collections import defaultdict, OrderedDict
-from collections.abc import Mapping
-from functools import lru_cache
-import datetime as dt
-import uuid
-import decimal
-import copy
-import inspect
-import json
-import typing
-import warnings
-
-from marshmallow import base, fields as ma_fields, class_registry
-from marshmallow.error_store import ErrorStore
-from marshmallow.exceptions import ValidationError, StringNotCollectionError
-from marshmallow.orderedset import OrderedSet
-from marshmallow.decorators import (
- POST_DUMP,
- POST_LOAD,
- PRE_DUMP,
- PRE_LOAD,
- VALIDATES,
- VALIDATES_SCHEMA,
-)
-from marshmallow.utils import (
- RAISE,
- EXCLUDE,
- INCLUDE,
- missing,
- set_value,
- get_value,
- is_collection,
- is_instance_or_subclass,
- is_iterable_but_not_string,
-)
-
-
-def _get_fields(attrs, field_class, pop=False, ordered=False):
- """Get fields from a class. If ordered=True, fields will be sorted by creation index.
-
- :param attrs: Mapping of class attributes
- :param type field_class: Base field class
- :param bool pop: Remove matching fields
- """
- fields = [
- (field_name, field_value)
- for field_name, field_value in attrs.items()
- if is_instance_or_subclass(field_value, field_class)
- ]
- if pop:
- for field_name, _ in fields:
- del attrs[field_name]
- if ordered:
- fields.sort(key=lambda pair: pair[1]._creation_index)
- return fields
-
-
-# This function allows Schemas to inherit from non-Schema classes and ensures
-# inheritance according to the MRO
-def _get_fields_by_mro(klass, field_class, ordered=False):
- """Collect fields from a class, following its method resolution order. The
- class itself is excluded from the search; only its parents are checked. Get
- fields from ``_declared_fields`` if available, else use ``__dict__``.
-
- :param type klass: Class whose fields to retrieve
- :param type field_class: Base field class
- """
- mro = inspect.getmro(klass)
- # Loop over mro in reverse to maintain correct order of fields
- return sum(
- (
- _get_fields(
- getattr(base, "_declared_fields", base.__dict__),
- field_class,
- ordered=ordered,
- )
- for base in mro[:0:-1]
- ),
- [],
- )
-
-
-class SchemaMeta(type):
- """Metaclass for the Schema class. Binds the declared fields to
- a ``_declared_fields`` attribute, which is a dictionary mapping attribute
- names to field objects. Also sets the ``opts`` class attribute, which is
- the Schema class's ``class Meta`` options.
- """
-
- def __new__(mcs, name, bases, attrs):
- meta = attrs.get("Meta")
- ordered = getattr(meta, "ordered", False)
- if not ordered:
- # Inherit 'ordered' option
- # Warning: We loop through bases instead of MRO because we don't
- # yet have access to the class object
- # (i.e. can't call super before we have fields)
- for base_ in bases:
- if hasattr(base_, "Meta") and hasattr(base_.Meta, "ordered"):
- ordered = base_.Meta.ordered
- break
- else:
- ordered = False
- cls_fields = _get_fields(attrs, base.FieldABC, pop=True, ordered=ordered)
- klass = super().__new__(mcs, name, bases, attrs)
- inherited_fields = _get_fields_by_mro(klass, base.FieldABC, ordered=ordered)
-
- meta = klass.Meta
- # Set klass.opts in __new__ rather than __init__ so that it is accessible in
- # get_declared_fields
- klass.opts = klass.OPTIONS_CLASS(meta, ordered=ordered)
- # Add fields specified in the `include` class Meta option
- cls_fields += list(klass.opts.include.items())
-
- dict_cls = OrderedDict if ordered else dict
- # Assign _declared_fields on class
- klass._declared_fields = mcs.get_declared_fields(
- klass=klass,
- cls_fields=cls_fields,
- inherited_fields=inherited_fields,
- dict_cls=dict_cls,
- )
- return klass
-
- @classmethod
- def get_declared_fields(mcs, klass, cls_fields, inherited_fields, dict_cls):
- """Returns a dictionary of field_name => `Field` pairs declared on the class.
- This is exposed mainly so that plugins can add additional fields, e.g. fields
- computed from class Meta options.
-
- :param type klass: The class object.
- :param list cls_fields: The fields declared on the class, including those added
- by the ``include`` class Meta option.
- :param list inherited_fields: Inherited fields.
- :param type dict_cls: Either `dict` or `OrderedDict`, depending on whether
- the user specified `ordered=True`.
- """
- return dict_cls(inherited_fields + cls_fields)
-
- def __init__(cls, name, bases, attrs):
- super().__init__(name, bases, attrs)
- if name and cls.opts.register:
- class_registry.register(name, cls)
- cls._hooks = cls.resolve_hooks()
-
- def resolve_hooks(cls):
- """Add in the decorated processors
-
- By doing this after constructing the class, we let standard inheritance
- do all the hard work.
- """
- mro = inspect.getmro(cls)
-
- hooks = defaultdict(list)
-
- for attr_name in dir(cls):
- # Need to look up the actual descriptor, not whatever might be
- # bound to the class. This needs to come from the __dict__ of the
- # declaring class.
- for parent in mro:
- try:
- attr = parent.__dict__[attr_name]
- except KeyError:
- continue
- else:
- break
- else:
- # In case we didn't find the attribute and didn't break above.
- # We should never hit this - it's just here for completeness
- # to exclude the possibility of attr being undefined.
- continue
-
- try:
- hook_config = attr.__marshmallow_hook__
- except AttributeError:
- pass
- else:
- for key in hook_config.keys():
- # Use name here so we can get the bound method later, in
- # case the processor was a descriptor or something.
- hooks[key].append(attr_name)
-
- return hooks
-
-
-class SchemaOpts:
- """class Meta options for the :class:`Schema`. Defines defaults."""
-
- def __init__(self, meta, ordered=False):
- self.fields = getattr(meta, "fields", ())
- if not isinstance(self.fields, (list, tuple)):
- raise ValueError("`fields` option must be a list or tuple.")
- self.additional = getattr(meta, "additional", ())
- if not isinstance(self.additional, (list, tuple)):
- raise ValueError("`additional` option must be a list or tuple.")
- if self.fields and self.additional:
- raise ValueError(
- "Cannot set both `fields` and `additional` options"
- " for the same Schema."
- )
- self.exclude = getattr(meta, "exclude", ())
- if not isinstance(self.exclude, (list, tuple)):
- raise ValueError("`exclude` must be a list or tuple.")
- self.dateformat = getattr(meta, "dateformat", None)
- self.datetimeformat = getattr(meta, "datetimeformat", None)
- if hasattr(meta, "json_module"):
- warnings.warn(
- "The json_module class Meta option is deprecated. Use render_module instead.",
- DeprecationWarning,
- )
- render_module = getattr(meta, "json_module", json)
- else:
- render_module = json
- self.render_module = getattr(meta, "render_module", render_module)
- self.ordered = getattr(meta, "ordered", ordered)
- self.index_errors = getattr(meta, "index_errors", True)
- self.include = getattr(meta, "include", {})
- self.load_only = getattr(meta, "load_only", ())
- self.dump_only = getattr(meta, "dump_only", ())
- self.unknown = getattr(meta, "unknown", RAISE)
- self.register = getattr(meta, "register", True)
-
-
-class BaseSchema(base.SchemaABC):
- """Base schema class with which to define custom schemas.
-
- Example usage:
-
- .. code-block:: python
-
- import datetime as dt
- from marshmallow import Schema, fields
-
- class Album:
- def __init__(self, title, release_date):
- self.title = title
- self.release_date = release_date
-
- class AlbumSchema(Schema):
- title = fields.Str()
- release_date = fields.Date()
-
- # Or, equivalently
- class AlbumSchema2(Schema):
- class Meta:
- fields = ("title", "release_date")
-
- album = Album("Beggars Banquet", dt.date(1968, 12, 6))
- schema = AlbumSchema()
- data = schema.dump(album)
- data # {'release_date': '1968-12-06', 'title': 'Beggars Banquet'}
-
- :param tuple|list only: Whitelist of the declared fields to select when
- instantiating the Schema. If None, all fields are used. Nested fields
- can be represented with dot delimiters.
- :param tuple|list exclude: Blacklist of the declared fields to exclude
- when instantiating the Schema. If a field appears in both `only` and
- `exclude`, it is not used. Nested fields can be represented with dot
- delimiters.
- :param bool many: Should be set to `True` if ``obj`` is a collection
- so that the object will be serialized to a list.
- :param dict context: Optional context passed to :class:`fields.Method` and
- :class:`fields.Function` fields.
- :param tuple|list load_only: Fields to skip during serialization (write-only fields)
- :param tuple|list dump_only: Fields to skip during deserialization (read-only fields)
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
-
- .. versionchanged:: 3.0.0
- `prefix` parameter removed.
-
- .. versionchanged:: 2.0.0
- `__validators__`, `__preprocessors__`, and `__data_handlers__` are removed in favor of
- `marshmallow.decorators.validates_schema`,
- `marshmallow.decorators.pre_load` and `marshmallow.decorators.post_dump`.
- `__accessor__` and `__error_handler__` are deprecated. Implement the
- `handle_error` and `get_attribute` methods instead.
- """
-
- TYPE_MAPPING = {
- str: ma_fields.String,
- bytes: ma_fields.String,
- dt.datetime: ma_fields.DateTime,
- float: ma_fields.Float,
- bool: ma_fields.Boolean,
- tuple: ma_fields.Raw,
- list: ma_fields.Raw,
- set: ma_fields.Raw,
- int: ma_fields.Integer,
- uuid.UUID: ma_fields.UUID,
- dt.time: ma_fields.Time,
- dt.date: ma_fields.Date,
- dt.timedelta: ma_fields.TimeDelta,
- decimal.Decimal: ma_fields.Decimal,
- }
- #: Overrides for default schema-level error messages
- error_messages = {}
-
- _default_error_messages = {
- "type": "Invalid input type.",
- "unknown": "Unknown field.",
- }
-
- OPTIONS_CLASS = SchemaOpts
-
- class Meta:
- """Options object for a Schema.
-
- Example usage: ::
-
- class Meta:
- fields = ("id", "email", "date_created")
- exclude = ("password", "secret_attribute")
-
- Available options:
-
- - ``fields``: Tuple or list of fields to include in the serialized result.
- - ``additional``: Tuple or list of fields to include *in addition* to the
- explicitly declared fields. ``additional`` and ``fields`` are
- mutually-exclusive options.
- - ``include``: Dictionary of additional fields to include in the schema. It is
- usually better to define fields as class variables, but you may need to
- use this option, e.g., if your fields are Python keywords. May be an
- `OrderedDict`.
- - ``exclude``: Tuple or list of fields to exclude in the serialized result.
- Nested fields can be represented with dot delimiters.
- - ``dateformat``: Default format for `Date <fields.Date>` fields.
- - ``datetimeformat``: Default format for `DateTime <fields.DateTime>` fields.
- - ``render_module``: Module to use for `loads <Schema.loads>` and `dumps <Schema.dumps>`.
- Defaults to `json` from the standard library.
- - ``ordered``: If `True`, order serialization output according to the
- order in which fields were declared. Output of `Schema.dump` will be a
- `collections.OrderedDict`.
- - ``index_errors``: If `True`, errors dictionaries will include the index
- of invalid items in a collection.
- - ``load_only``: Tuple or list of fields to exclude from serialized results.
- - ``dump_only``: Tuple or list of fields to exclude from deserialization
- - ``unknown``: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- - ``register``: Whether to register the `Schema` with marshmallow's internal
- class registry. Must be `True` if you intend to refer to this `Schema`
- by class name in `Nested` fields. Only set this to `False` when memory
- usage is critical. Defaults to `True`.
- """
-
- pass
-
- def __init__(
- self,
- *,
- only=None,
- exclude=(),
- many=False,
- context=None,
- load_only=(),
- dump_only=(),
- partial=False,
- unknown=None
- ):
- # Raise error if only or exclude is passed as string, not list of strings
- if only is not None and not is_collection(only):
- raise StringNotCollectionError('"only" should be a list of strings')
- if exclude is not None and not is_collection(exclude):
- raise StringNotCollectionError('"exclude" should be a list of strings')
- # copy declared fields from metaclass
- self.declared_fields = copy.deepcopy(self._declared_fields)
- self.many = many
- self.only = only
- self.exclude = set(self.opts.exclude) | set(exclude)
- self.ordered = self.opts.ordered
- self.load_only = set(load_only) or set(self.opts.load_only)
- self.dump_only = set(dump_only) or set(self.opts.dump_only)
- self.partial = partial
- self.unknown = unknown or self.opts.unknown
- self.context = context or {}
- self._normalize_nested_options()
- #: Dictionary mapping field_names -> :class:`Field` objects
- self.fields = self._init_fields()
- self.dump_fields, self.load_fields = self.dict_class(), self.dict_class()
- for field_name, field_obj in self.fields.items():
- if field_obj.load_only:
- self.load_fields[field_name] = field_obj
- elif field_obj.dump_only:
- self.dump_fields[field_name] = field_obj
- else:
- self.load_fields[field_name] = field_obj
- self.dump_fields[field_name] = field_obj
- messages = {}
- messages.update(self._default_error_messages)
- for cls in reversed(self.__class__.__mro__):
- messages.update(getattr(cls, "error_messages", {}))
- messages.update(self.error_messages or {})
- self.error_messages = messages
-
- def __repr__(self):
- return "<{ClassName}(many={self.many})>".format(
- ClassName=self.__class__.__name__, self=self
- )
-
- @property
- def dict_class(self):
- return OrderedDict if self.ordered else dict
-
- @property
- def set_class(self):
- return OrderedSet if self.ordered else set
-
- @classmethod
- def from_dict(
- cls, fields: typing.Dict[str, ma_fields.Field], *, name: str = "GeneratedSchema"
- ) -> typing.Type["Schema"]:
- """Generate a `Schema` class given a dictionary of fields.
-
- .. code-block:: python
-
- from marshmallow import Schema, fields
-
- PersonSchema = Schema.from_dict({"name": fields.Str()})
- print(PersonSchema().load({"name": "David"})) # => {'name': 'David'}
-
- Generated schemas are not added to the class registry and therefore cannot
- be referred to by name in `Nested` fields.
-
- :param dict fields: Dictionary mapping field names to field instances.
- :param str name: Optional name for the class, which will appear in
- the ``repr`` for the class.
-
- .. versionadded:: 3.0.0
- """
- attrs = fields.copy()
- attrs["Meta"] = type(
- "GeneratedMeta", (getattr(cls, "Meta", object),), {"register": False}
- )
- schema_cls = type(name, (cls,), attrs)
- return schema_cls
-
- ##### Override-able methods #####
-
- def handle_error(self, error, data, *, many, **kwargs):
- """Custom error handler function for the schema.
-
- :param ValidationError error: The `ValidationError` raised during (de)serialization.
- :param data: The original input data.
- :param bool many: Value of ``many`` on dump or load.
- :param bool partial: Value of ``partial`` on load.
-
- .. versionadded:: 2.0.0
-
- .. versionchanged:: 3.0.0rc9
- Receives `many` and `partial` (on deserialization) as keyword arguments.
- """
- pass
-
- def get_attribute(self, obj, attr, default):
- """Defines how to pull values from an object to serialize.
-
- .. versionadded:: 2.0.0
-
- .. versionchanged:: 3.0.0a1
- Changed position of ``obj`` and ``attr``.
- """
- return get_value(obj, attr, default)
-
- ##### Serialization/Deserialization API #####
-
- @staticmethod
- def _call_and_store(getter_func, data, *, field_name, error_store, index=None):
- """Call ``getter_func`` with ``data`` as its argument, and store any `ValidationErrors`.
-
- :param callable getter_func: Function for getting the serialized/deserialized
- value from ``data``.
- :param data: The data passed to ``getter_func``.
- :param str field_name: Field name.
- :param int index: Index of the item being validated, if validating a collection,
- otherwise `None`.
- """
- try:
- value = getter_func(data)
- except ValidationError as error:
- error_store.store_error(error.messages, field_name, index=index)
- # When a Nested field fails validation, the marshalled data is stored
- # on the ValidationError's valid_data attribute
- return error.valid_data or missing
- return value
-
- def _serialize(self, obj, *, many=False):
- """Serialize ``obj``.
-
- :param obj: The object(s) to serialize.
- :param bool many: `True` if ``data`` should be serialized as a collection.
- :return: A dictionary of the serialized data
-
- .. versionchanged:: 1.0.0
- Renamed from ``marshal``.
- """
- if many and obj is not None:
- return [self._serialize(d, many=False) for d in obj]
- ret = self.dict_class()
- for attr_name, field_obj in self.dump_fields.items():
- value = field_obj.serialize(attr_name, obj, accessor=self.get_attribute)
- if value is missing:
- continue
- key = field_obj.data_key or attr_name
- ret[key] = value
- return ret
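
The field loop in `_serialize` above skips missing values and renames output keys via `data_key` when one is set. A minimal sketch of that loop for mapping inputs (hypothetical helper, using a module-level sentinel in place of marshmallow's `missing`):

```python
# Sketch of the dump loop: absent values are skipped, and data_key (if
# provided) replaces the attribute name in the serialized output.
MISSING = object()

def serialize(obj, fields):
    # fields: mapping of attr_name -> optional data_key (or None)
    out = {}
    for attr, data_key in fields.items():
        value = obj.get(attr, MISSING)
        if value is MISSING:
            continue
        out[data_key or attr] = value
    return out
```

Keys in the input that are not declared as fields are ignored, matching the behavior of iterating over `dump_fields` rather than over the input data.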
-
- def dump(self, obj, *, many=None):
- """Serialize an object to native Python data types according to this
- Schema's fields.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :return: A dict of serialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the serialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if ``obj`` is invalid.
- .. versionchanged:: 3.0.0rc9
- Validation no longer occurs upon serialization.
- """
- many = self.many if many is None else bool(many)
- if many and is_iterable_but_not_string(obj):
- obj = list(obj)
-
- if self._has_processors(PRE_DUMP):
- processed_obj = self._invoke_dump_processors(
- PRE_DUMP, obj, many=many, original_data=obj
- )
- else:
- processed_obj = obj
-
- result = self._serialize(processed_obj, many=many)
-
- if self._has_processors(POST_DUMP):
- result = self._invoke_dump_processors(
- POST_DUMP, result, many=many, original_data=obj
- )
-
- return result
-
- def dumps(self, obj, *args, many=None, **kwargs):
- """Same as :meth:`dump`, except return a JSON-encoded string.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :return: A ``json`` string
- :rtype: str
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the serialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if ``obj`` is invalid.
- """
- serialized = self.dump(obj, many=many)
- return self.opts.render_module.dumps(serialized, *args, **kwargs)
-
- def _deserialize(
- self, data, *, error_store, many=False, partial=False, unknown=RAISE, index=None
- ):
- """Deserialize ``data``.
-
- :param dict data: The data to deserialize.
- :param ErrorStore error_store: Structure to store errors.
- :param bool many: `True` if ``data`` should be deserialized as a collection.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- :param int index: Index of the item being serialized (for storing errors) if
- serializing a collection, otherwise `None`.
- :return: A dictionary of the deserialized data.
- """
- index_errors = self.opts.index_errors
- index = index if index_errors else None
- if many:
- if not is_collection(data):
- error_store.store_error([self.error_messages["type"]], index=index)
- ret = []
- else:
- ret = [
- self._deserialize(
- d,
- error_store=error_store,
- many=False,
- partial=partial,
- unknown=unknown,
- index=idx,
- )
- for idx, d in enumerate(data)
- ]
- return ret
- ret = self.dict_class()
- # Check data is a dict
- if not isinstance(data, Mapping):
- error_store.store_error([self.error_messages["type"]], index=index)
- else:
- partial_is_collection = is_collection(partial)
- for attr_name, field_obj in self.load_fields.items():
- field_name = field_obj.data_key or attr_name
- raw_value = data.get(field_name, missing)
- if raw_value is missing:
- # Ignore missing field if we're allowed to.
- if partial is True or (
- partial_is_collection and attr_name in partial
- ):
- continue
- d_kwargs = {}
- # Allow partial loading of nested schemas.
- if partial_is_collection:
- prefix = field_name + "."
- len_prefix = len(prefix)
- sub_partial = [
- f[len_prefix:] for f in partial if f.startswith(prefix)
- ]
- d_kwargs["partial"] = sub_partial
- else:
- d_kwargs["partial"] = partial
- getter = lambda val: field_obj.deserialize(
- val, field_name, data, **d_kwargs
- )
- value = self._call_and_store(
- getter_func=getter,
- data=raw_value,
- field_name=field_name,
- error_store=error_store,
- index=index,
- )
- if value is not missing:
- key = field_obj.attribute or attr_name
- set_value(ret, key, value)
- if unknown != EXCLUDE:
- fields = {
- field_obj.data_key or field_name
- for field_name, field_obj in self.load_fields.items()
- }
- for key in set(data) - fields:
- value = data[key]
- if unknown == INCLUDE:
- set_value(ret, key, value)
- elif unknown == RAISE:
- error_store.store_error(
- [self.error_messages["unknown"]],
- key,
- (index if index_errors else None),
- )
- return ret
-
- def load(self, data, *, many=None, partial=None, unknown=None):
- """Deserialize a data structure to an object defined by this Schema's fields.
-
- :param dict data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :return: A dict of deserialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the deserialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if invalid data are passed.
- """
- return self._do_load(
- data, many=many, partial=partial, unknown=unknown, postprocess=True
- )
-
- def loads(self, json_data, *, many=None, partial=None, unknown=None, **kwargs):
- """Same as :meth:`load`, except it takes a JSON string as input.
-
- :param str json_data: A JSON string of the data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :return: A dict of deserialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the deserialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if invalid data are passed.
- """
- data = self.opts.render_module.loads(json_data, **kwargs)
- return self.load(data, many=many, partial=partial, unknown=unknown)
-
- def _run_validator(
- self,
- validator_func,
- output,
- *,
- original_data,
- error_store,
- many,
- partial,
- pass_original,
- index=None
- ):
- try:
- if pass_original: # Pass original, raw data (before unmarshalling)
- validator_func(output, original_data, partial=partial, many=many)
- else:
- validator_func(output, partial=partial, many=many)
- except ValidationError as err:
- error_store.store_error(err.messages, err.field_name, index=index)
-
- def validate(self, data, *, many=None, partial=None):
- """Validate `data` against the schema, returning a dictionary of
- validation errors.
-
- :param dict data: The data to validate.
- :param bool many: Whether to validate `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :return: A dictionary of validation errors.
- :rtype: dict
-
- .. versionadded:: 1.1.0
- """
- try:
- self._do_load(data, many=many, partial=partial, postprocess=False)
- except ValidationError as exc:
- return exc.messages
- return {}
-
- ##### Private Helpers #####
-
- def _do_load(
- self, data, *, many=None, partial=None, unknown=None, postprocess=True
- ):
- """Deserialize `data`, returning the deserialized result.
-
- :param data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to validate required fields. If its
- value is an iterable, only fields listed in that iterable will be
- allowed to be missing. If `True`, all fields will be allowed missing.
- If `None`, the value for `self.partial` is used.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :param bool postprocess: Whether to run post_load methods.
- :return: A dict of deserialized data
- :rtype: dict
- """
- error_store = ErrorStore()
- errors = {}
- many = self.many if many is None else bool(many)
- unknown = unknown or self.unknown
- if partial is None:
- partial = self.partial
- # Run preprocessors
- if self._has_processors(PRE_LOAD):
- try:
- processed_data = self._invoke_load_processors(
- PRE_LOAD, data, many=many, original_data=data, partial=partial
- )
- except ValidationError as err:
- errors = err.normalized_messages()
- result = None
- else:
- processed_data = data
- if not errors:
- # Deserialize data
- result = self._deserialize(
- processed_data,
- error_store=error_store,
- many=many,
- partial=partial,
- unknown=unknown,
- )
- # Run field-level validation
- self._invoke_field_validators(
- error_store=error_store, data=result, many=many
- )
- # Run schema-level validation
- if self._has_processors(VALIDATES_SCHEMA):
- field_errors = bool(error_store.errors)
- self._invoke_schema_validators(
- error_store=error_store,
- pass_many=True,
- data=result,
- original_data=data,
- many=many,
- partial=partial,
- field_errors=field_errors,
- )
- self._invoke_schema_validators(
- error_store=error_store,
- pass_many=False,
- data=result,
- original_data=data,
- many=many,
- partial=partial,
- field_errors=field_errors,
- )
- errors = error_store.errors
- # Run post processors
- if not errors and postprocess and self._has_processors(POST_LOAD):
- try:
- result = self._invoke_load_processors(
- POST_LOAD,
- result,
- many=many,
- original_data=data,
- partial=partial,
- )
- except ValidationError as err:
- errors = err.normalized_messages()
- if errors:
- exc = ValidationError(errors, data=data, valid_data=result)
- self.handle_error(exc, data, many=many, partial=partial)
- raise exc
-
- return result
-
- def _normalize_nested_options(self):
- """Apply then flatten nested schema options"""
- if self.only is not None:
- # Apply the only option to nested fields.
- self.__apply_nested_option("only", self.only, "intersection")
- # Remove the child field names from the only option.
- self.only = self.set_class([field.split(".", 1)[0] for field in self.only])
- if self.exclude:
- # Apply the exclude option to nested fields.
- self.__apply_nested_option("exclude", self.exclude, "union")
- # Remove the parent field names from the exclude option.
- self.exclude = self.set_class(
- [field for field in self.exclude if "." not in field]
- )
-
- def __apply_nested_option(self, option_name, field_names, set_operation):
- """Apply nested options to nested fields"""
- # Split nested field names on the first dot.
- nested_fields = [name.split(".", 1) for name in field_names if "." in name]
- # Partition the nested field names by parent field.
- nested_options = defaultdict(list)
- for parent, nested_names in nested_fields:
- nested_options[parent].append(nested_names)
- # Apply the nested field options.
- for key, options in iter(nested_options.items()):
- new_options = self.set_class(options)
- original_options = getattr(self.declared_fields[key], option_name, ())
- if original_options:
- if set_operation == "union":
- new_options |= self.set_class(original_options)
- if set_operation == "intersection":
- new_options &= self.set_class(original_options)
- setattr(self.declared_fields[key], option_name, new_options)
-
- def _init_fields(self):
- """Update fields based on schema options."""
- if self.opts.fields:
- available_field_names = self.set_class(self.opts.fields)
- else:
- available_field_names = self.set_class(self.declared_fields.keys())
- if self.opts.additional:
- available_field_names |= self.set_class(self.opts.additional)
-
- invalid_fields = self.set_class()
-
- if self.only is not None:
- # Return only fields specified in only option
- field_names = self.set_class(self.only)
-
- invalid_fields |= field_names - available_field_names
- else:
- field_names = available_field_names
-
- # If "exclude" option or param is specified, remove those fields.
- if self.exclude:
- # Note that this isn't available_field_names, since we want to
- # apply "only" for the actual calculation.
- field_names = field_names - self.exclude
- invalid_fields |= self.exclude - available_field_names
-
- if invalid_fields:
- message = "Invalid fields for {}: {}.".format(self, invalid_fields)
- raise ValueError(message)
-
- fields_dict = self.dict_class()
- for field_name in field_names:
- field_obj = self.declared_fields.get(field_name, ma_fields.Inferred())
- self._bind_field(field_name, field_obj)
- fields_dict[field_name] = field_obj
-
- dump_data_keys = [
- obj.data_key or name
- for name, obj in fields_dict.items()
- if not obj.load_only
- ]
- if len(dump_data_keys) != len(set(dump_data_keys)):
- data_keys_duplicates = {
- x for x in dump_data_keys if dump_data_keys.count(x) > 1
- }
- raise ValueError(
- "The data_key argument for one or more fields collides "
- "with another field's name or data_key argument. "
- "Check the following field names and "
- "data_key arguments: {}".format(list(data_keys_duplicates))
- )
-
- load_attributes = [
- obj.attribute or name
- for name, obj in fields_dict.items()
- if not obj.dump_only
- ]
- if len(load_attributes) != len(set(load_attributes)):
- attributes_duplicates = {
- x for x in load_attributes if load_attributes.count(x) > 1
- }
- raise ValueError(
- "The attribute argument for one or more fields collides "
- "with another field's name or attribute argument. "
- "Check the following field names and "
- "attribute arguments: {}".format(list(attributes_duplicates))
- )
-
- return fields_dict
-
- def on_bind_field(self, field_name, field_obj):
- """Hook to modify a field when it is bound to the `Schema`.
-
- No-op by default.
- """
- return None
-
- def _bind_field(self, field_name, field_obj):
- """Bind field to the schema, setting any necessary attributes on the
- field (e.g. parent and name).
-
- Also set field load_only and dump_only values if field_name was
- specified in ``class Meta``.
- """
- try:
- if field_name in self.load_only:
- field_obj.load_only = True
- if field_name in self.dump_only:
- field_obj.dump_only = True
- field_obj._bind_to_schema(field_name, self)
- self.on_bind_field(field_name, field_obj)
- except TypeError as error:
- # field declared as a class, not an instance
- if isinstance(field_obj, type) and issubclass(field_obj, base.FieldABC):
- msg = (
- 'Field for "{}" must be declared as a '
- "Field instance, not a class. "
- 'Did you mean "fields.{}()"?'.format(field_name, field_obj.__name__)
- )
- raise TypeError(msg) from error
-
- @lru_cache(maxsize=8)
- def _has_processors(self, tag):
- return self._hooks[(tag, True)] or self._hooks[(tag, False)]
-
- def _invoke_dump_processors(self, tag, data, *, many, original_data=None):
- # The pass_many post-dump processors may do things like add an envelope, so
- # invoke those after invoking the non-pass_many processors which will expect
- # to get a list of items.
- data = self._invoke_processors(
- tag, pass_many=False, data=data, many=many, original_data=original_data
- )
- data = self._invoke_processors(
- tag, pass_many=True, data=data, many=many, original_data=original_data
- )
- return data
-
- def _invoke_load_processors(self, tag, data, *, many, original_data, partial):
- # This has to invert the order of the dump processors, so run the pass_many
- # processors first.
- data = self._invoke_processors(
- tag,
- pass_many=True,
- data=data,
- many=many,
- original_data=original_data,
- partial=partial,
- )
- data = self._invoke_processors(
- tag,
- pass_many=False,
- data=data,
- many=many,
- original_data=original_data,
- partial=partial,
- )
- return data
-
- def _invoke_field_validators(self, *, error_store, data, many):
- for attr_name in self._hooks[VALIDATES]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_hook__[VALIDATES]
- field_name = validator_kwargs["field_name"]
-
- try:
- field_obj = self.fields[field_name]
- except KeyError as error:
- if field_name in self.declared_fields:
- continue
- raise ValueError(
- '"{}" field does not exist.'.format(field_name)
- ) from error
-
- if many:
- for idx, item in enumerate(data):
- try:
- value = item[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = self._call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.data_key or field_name,
- error_store=error_store,
- index=(idx if self.opts.index_errors else None),
- )
- if validated_value is missing:
- data[idx].pop(field_name, None)
- else:
- try:
- value = data[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = self._call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.data_key or field_name,
- error_store=error_store,
- )
- if validated_value is missing:
- data.pop(field_name, None)
-
- def _invoke_schema_validators(
- self,
- *,
- error_store,
- pass_many,
- data,
- original_data,
- many,
- partial,
- field_errors=False
- ):
- for attr_name in self._hooks[(VALIDATES_SCHEMA, pass_many)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_hook__[
- (VALIDATES_SCHEMA, pass_many)
- ]
- if field_errors and validator_kwargs["skip_on_field_errors"]:
- continue
- pass_original = validator_kwargs.get("pass_original", False)
-
- if many and not pass_many:
- for idx, (item, orig) in enumerate(zip(data, original_data)):
- self._run_validator(
- validator,
- item,
- original_data=orig,
- error_store=error_store,
- many=many,
- partial=partial,
- index=idx,
- pass_original=pass_original,
- )
- else:
- self._run_validator(
- validator,
- data,
- original_data=original_data,
- error_store=error_store,
- many=many,
- pass_original=pass_original,
- partial=partial,
- )
-
- def _invoke_processors(
- self, tag, *, pass_many, data, many, original_data=None, **kwargs
- ):
- key = (tag, pass_many)
- for attr_name in self._hooks[key]:
- # This will be a bound method.
- processor = getattr(self, attr_name)
-
- processor_kwargs = processor.__marshmallow_hook__[key]
- pass_original = processor_kwargs.get("pass_original", False)
-
- if pass_many:
- if pass_original:
- data = processor(data, original_data, many=many, **kwargs)
- else:
- data = processor(data, many=many, **kwargs)
- elif many:
- if pass_original:
- data = [
- processor(item, original, many=many, **kwargs)
- for item, original in zip(data, original_data)
- ]
- else:
- data = [processor(item, many=many, **kwargs) for item in data]
- else:
- if pass_original:
- data = processor(data, original_data, many=many, **kwargs)
- else:
- data = processor(data, many=many, **kwargs)
- return data
-
-
-class Schema(BaseSchema, metaclass=SchemaMeta):
- __doc__ = BaseSchema.__doc__
+or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
\ No newline at end of file
Scoring
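The patch above removes (among the rest of `schema.py`) the `_serialize` loop that produces dump output: iterate the dump fields, skip values equal to the `missing` sentinel, and write each value under the field's `data_key`, falling back to the attribute name. A minimal self-contained sketch of that pattern follows; the sentinel and function names here are hypothetical stand-ins, not marshmallow's actual API:

```python
# Sketch of the dump-side pattern from the removed _serialize:
# skip values equal to a "missing" sentinel, and key the output by
# data_key when one is set, else by the attribute name.
MISSING = object()  # hypothetical stand-in for marshmallow's `missing` sentinel


def serialize(obj, fields):
    """obj: mapping of attribute values; fields: attr_name -> data_key (or None)."""
    ret = {}
    for attr_name, data_key in fields.items():
        value = obj.get(attr_name, MISSING)
        if value is MISSING:
            continue  # omitted from output, mirroring `if value is missing: continue`
        ret[data_key or attr_name] = value
    return ret
```

With `fields={"name": "fullName", "age": None}`, an input that lacks `age` simply omits that key from the result rather than emitting a null.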
Passing target tests
No fail-to-pass successes recorded yet.
Failing target tests
tests/test_fields.py::TestParentAndName::test_datetime_list_inner_format
Maintained regression tests
No pass-to-pass successes recorded yet.
Regressed tests
tests/test_fields.py::test_field_aliases[Integer-Integer] tests/test_fields.py::test_field_aliases[String-String] tests/test_fields.py::test_field_aliases[Boolean-Boolean] tests/test_fields.py::test_field_aliases[Url-Url] tests/test_fields.py::TestField::test_repr tests/test_fields.py::TestField::test_error_raised_if_uncallable_validator_passed tests/test_fields.py::TestField::test_error_raised_if_missing_is_set_on_required_field tests/test_fields.py::TestField::test_custom_field_receives_attr_and_obj tests/test_fields.py::TestField::test_custom_field_receives_data_key_if_set tests/test_fields.py::TestField::test_custom_field_follows_data_key_if_set tests/test_fields.py::TestParentAndName::test_simple_field_parent_and_name tests/test_fields.py::TestParentAndName::test_unbound_field_root_returns_none tests/test_fields.py::TestParentAndName::test_list_field_inner_parent_and_name tests/test_fields.py::TestParentAndName::test_tuple_field_inner_parent_and_name tests/test_fields.py::TestParentAndName::test_mapping_field_inner_parent_and_name tests/test_fields.py::TestParentAndName::test_simple_field_root tests/test_fields.py::TestParentAndName::test_list_field_inner_root tests/test_fields.py::TestParentAndName::test_tuple_field_inner_root tests/test_fields.py::TestParentAndName::test_list_root_inheritance tests/test_fields.py::TestParentAndName::test_dict_root_inheritance tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[String] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Integer] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Boolean] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Float] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Number] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[DateTime] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Time] 
tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Date] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[TimeDelta] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Dict] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Url] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Email] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[UUID] tests/test_fields.py::TestMetadata::test_extra_metadata_may_be_added_to_field[Decimal] tests/test_fields.py::TestErrorMessages::test_default_error_messages_get_merged_with_parent_error_messages_cstm_msg tests/test_fields.py::TestErrorMessages::test_default_error_messages_get_merged_with_parent_error_messages tests/test_fields.py::TestErrorMessages::test_make_error[required-Missing tests/test_fields.py::TestErrorMessages::test_make_error[null-Field tests/test_fields.py::TestErrorMessages::test_make_error[custom-Custom tests/test_fields.py::TestErrorMessages::test_make_error[validator_failed-Invalid tests/test_fields.py::TestErrorMessages::test_fail[required-Missing tests/test_fields.py::TestErrorMessages::test_fail[null-Field tests/test_fields.py::TestErrorMessages::test_fail[custom-Custom tests/test_fields.py::TestErrorMessages::test_fail[validator_failed-Invalid tests/test_fields.py::TestErrorMessages::test_make_error_key_doesnt_exist tests/test_fields.py::TestNestedField::test_nested_only_and_exclude_as_string[only] tests/test_fields.py::TestNestedField::test_nested_only_and_exclude_as_string[exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[None-exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[None-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[None-raise] tests/test_fields.py::TestNestedField::test_nested_unknown_override[exclude-exclude] 
tests/test_fields.py::TestNestedField::test_nested_unknown_override[exclude-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[exclude-raise] tests/test_fields.py::TestNestedField::test_nested_unknown_override[include-exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[include-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[include-raise] tests/test_fields.py::TestNestedField::test_nested_unknown_override[raise-exclude] tests/test_fields.py::TestNestedField::test_nested_unknown_override[raise-include] tests/test_fields.py::TestNestedField::test_nested_unknown_override[raise-raise] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[only] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[exclude] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[dump_only] tests/test_fields.py::TestListNested::test_list_nested_only_exclude_dump_only_load_only_propagated_to_nested[load_only] tests/test_fields.py::TestListNested::test_list_nested_only_and_exclude_merged_with_nested[only-expected0] tests/test_fields.py::TestListNested::test_list_nested_only_and_exclude_merged_with_nested[exclude-expected1] tests/test_fields.py::TestListNested::test_list_nested_partial_propagated_to_nested tests/test_fields.py::TestTupleNested::test_tuple_nested_only_exclude_dump_only_load_only_propagated_to_nested[dump_only] tests/test_fields.py::TestTupleNested::test_tuple_nested_only_exclude_dump_only_load_only_propagated_to_nested[load_only] tests/test_fields.py::TestTupleNested::test_tuple_nested_partial_propagated_to_nested tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[only] 
tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[exclude] tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[dump_only] tests/test_fields.py::TestDictNested::test_dict_nested_only_exclude_dump_only_load_only_propagated_to_nested[load_only] tests/test_fields.py::TestDictNested::test_dict_nested_only_and_exclude_merged_with_nested[only-expected0] tests/test_fields.py::TestDictNested::test_dict_nested_only_and_exclude_merged_with_nested[exclude-expected1] tests/test_fields.py::TestDictNested::test_dict_nested_partial_propagated_to_nested
Harness output
+ source /opt/miniconda3/bin/activate
++ _CONDA_ROOT=/opt/miniconda3
++ . /opt/miniconda3/etc/profile.d/conda.sh
+++ export CONDA_EXE=/opt/miniconda3/bin/conda
+++ CONDA_EXE=/opt/miniconda3/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1=
+++ __conda_exe shell.posix activate
+++ /opt/miniconda3/bin/conda shell.posix activate
++ ask_conda='PS1='\''(base) '\''
export PATH='\''/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''base'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_PREFIX_1='\''/opt/miniconda3/envs/testbed'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
++ eval 'PS1='\''(base) '\''
export PATH='\''/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3'\''
export CONDA_SHLVL='\''2'\''
export CONDA_DEFAULT_ENV='\''base'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_PREFIX_1='\''/opt/miniconda3/envs/testbed'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
+++ PS1='(base) '
+++ export PATH=/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ PATH=/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ export CONDA_PREFIX=/opt/miniconda3
+++ CONDA_PREFIX=/opt/miniconda3
+++ export CONDA_SHLVL=2
+++ CONDA_SHLVL=2
+++ export CONDA_DEFAULT_ENV=base
+++ CONDA_DEFAULT_ENV=base
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_PREFIX_1=/opt/miniconda3/envs/testbed
+++ CONDA_PREFIX_1=/opt/miniconda3/envs/testbed
+++ export CONDA_EXE=/opt/miniconda3/bin/conda
+++ CONDA_EXE=/opt/miniconda3/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate testbed
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate testbed
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate testbed
++ /opt/miniconda3/bin/conda shell.posix activate testbed
+ ask_conda='PS1='\''(testbed) '\''
export PATH='\''/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3/envs/testbed'\''
export CONDA_SHLVL='\''3'\''
export CONDA_DEFAULT_ENV='\''testbed'\''
export CONDA_PROMPT_MODIFIER='\''(testbed) '\''
export CONDA_PREFIX_2='\''/opt/miniconda3'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
+ eval 'PS1='\''(testbed) '\''
export PATH='\''/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3/envs/testbed'\''
export CONDA_SHLVL='\''3'\''
export CONDA_DEFAULT_ENV='\''testbed'\''
export CONDA_PROMPT_MODIFIER='\''(testbed) '\''
export CONDA_PREFIX_2='\''/opt/miniconda3'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
++ PS1='(testbed) '
++ export PATH=/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ PATH=/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ export CONDA_PREFIX=/opt/miniconda3/envs/testbed
++ CONDA_PREFIX=/opt/miniconda3/envs/testbed
++ export CONDA_SHLVL=3
++ CONDA_SHLVL=3
++ export CONDA_DEFAULT_ENV=testbed
++ CONDA_DEFAULT_ENV=testbed
++ export 'CONDA_PROMPT_MODIFIER=(testbed) '
++ CONDA_PROMPT_MODIFIER='(testbed) '
++ export CONDA_PREFIX_2=/opt/miniconda3
++ CONDA_PREFIX_2=/opt/miniconda3
++ export CONDA_EXE=/opt/miniconda3/bin/conda
++ CONDA_EXE=/opt/miniconda3/bin/conda
++ export _CE_M=
++ _CE_M=
++ export _CE_CONDA=
++ _CE_CONDA=
++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ cd /testbed
+ git config --global --add safe.directory /testbed
+ cd /testbed
+ git status
On branch dev
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: src/marshmallow/schema.py
no changes added to commit (use "git add" and/or "git commit -a")
+ git show
commit 8ad28316545371a1d1d7c2ad0b32d5f8ab206853
Author: SWE-bench <setup@swebench.config>
Date: Tue May 6 23:07:21 2025 +0000
SWE-bench
+ git -c core.fileMode=false diff b40a0f4e33823e6d0f341f7e8684e359a99060d1
diff --git a/src/marshmallow/schema.py b/src/marshmallow/schema.py
index 520ffb44..a683c166 100644
--- a/src/marshmallow/schema.py
+++ b/src/marshmallow/schema.py
@@ -1,1140 +1 @@
-"""The :class:`Schema` class, including its metaclass and options (class Meta)."""
-from collections import defaultdict, OrderedDict
-from collections.abc import Mapping
-from functools import lru_cache
-import datetime as dt
-import uuid
-import decimal
-import copy
-import inspect
-import json
-import typing
-import warnings
-
-from marshmallow import base, fields as ma_fields, class_registry
-from marshmallow.error_store import ErrorStore
-from marshmallow.exceptions import ValidationError, StringNotCollectionError
-from marshmallow.orderedset import OrderedSet
-from marshmallow.decorators import (
- POST_DUMP,
- POST_LOAD,
- PRE_DUMP,
- PRE_LOAD,
- VALIDATES,
- VALIDATES_SCHEMA,
-)
-from marshmallow.utils import (
- RAISE,
- EXCLUDE,
- INCLUDE,
- missing,
- set_value,
- get_value,
- is_collection,
- is_instance_or_subclass,
- is_iterable_but_not_string,
-)
-
-
-def _get_fields(attrs, field_class, pop=False, ordered=False):
- """Get fields from a class. If ordered=True, fields will sorted by creation index.
-
- :param attrs: Mapping of class attributes
- :param type field_class: Base field class
- :param bool pop: Remove matching fields
- """
- fields = [
- (field_name, field_value)
- for field_name, field_value in attrs.items()
- if is_instance_or_subclass(field_value, field_class)
- ]
- if pop:
- for field_name, _ in fields:
- del attrs[field_name]
- if ordered:
- fields.sort(key=lambda pair: pair[1]._creation_index)
- return fields
-
-
-# This function allows Schemas to inherit from non-Schema classes and ensures
-# inheritance according to the MRO
-def _get_fields_by_mro(klass, field_class, ordered=False):
- """Collect fields from a class, following its method resolution order. The
- class itself is excluded from the search; only its parents are checked. Get
- fields from ``_declared_fields`` if available, else use ``__dict__``.
-
- :param type klass: Class whose fields to retrieve
- :param type field_class: Base field class
- """
- mro = inspect.getmro(klass)
- # Loop over mro in reverse to maintain correct order of fields
- return sum(
- (
- _get_fields(
- getattr(base, "_declared_fields", base.__dict__),
- field_class,
- ordered=ordered,
- )
- for base in mro[:0:-1]
- ),
- [],
- )
-
-
-class SchemaMeta(type):
- """Metaclass for the Schema class. Binds the declared fields to
- a ``_declared_fields`` attribute, which is a dictionary mapping attribute
- names to field objects. Also sets the ``opts`` class attribute, which is
- the Schema class's ``class Meta`` options.
- """
-
- def __new__(mcs, name, bases, attrs):
- meta = attrs.get("Meta")
- ordered = getattr(meta, "ordered", False)
- if not ordered:
- # Inherit 'ordered' option
- # Warning: We loop through bases instead of MRO because we don't
- # yet have access to the class object
- # (i.e. can't call super before we have fields)
- for base_ in bases:
- if hasattr(base_, "Meta") and hasattr(base_.Meta, "ordered"):
- ordered = base_.Meta.ordered
- break
- else:
- ordered = False
- cls_fields = _get_fields(attrs, base.FieldABC, pop=True, ordered=ordered)
- klass = super().__new__(mcs, name, bases, attrs)
- inherited_fields = _get_fields_by_mro(klass, base.FieldABC, ordered=ordered)
-
- meta = klass.Meta
- # Set klass.opts in __new__ rather than __init__ so that it is accessible in
- # get_declared_fields
- klass.opts = klass.OPTIONS_CLASS(meta, ordered=ordered)
- # Add fields specified in the `include` class Meta option
- cls_fields += list(klass.opts.include.items())
-
- dict_cls = OrderedDict if ordered else dict
- # Assign _declared_fields on class
- klass._declared_fields = mcs.get_declared_fields(
- klass=klass,
- cls_fields=cls_fields,
- inherited_fields=inherited_fields,
- dict_cls=dict_cls,
- )
- return klass
-
- @classmethod
- def get_declared_fields(mcs, klass, cls_fields, inherited_fields, dict_cls):
- """Returns a dictionary of field_name => `Field` pairs declared on the class.
- This is exposed mainly so that plugins can add additional fields, e.g. fields
- computed from class Meta options.
-
- :param type klass: The class object.
- :param list cls_fields: The fields declared on the class, including those added
- by the ``include`` class Meta option.
- :param list inherited_fields: Inherited fields.
- :param type dict_cls: Either `dict` or `OrderedDict`, depending on whether
- the user specified `ordered=True`.
- """
- return dict_cls(inherited_fields + cls_fields)
-
- def __init__(cls, name, bases, attrs):
- super().__init__(cls, bases, attrs)
- if name and cls.opts.register:
- class_registry.register(name, cls)
- cls._hooks = cls.resolve_hooks()
-
- def resolve_hooks(cls):
- """Add in the decorated processors
-
- By doing this after constructing the class, we let standard inheritance
- do all the hard work.
- """
- mro = inspect.getmro(cls)
-
- hooks = defaultdict(list)
-
- for attr_name in dir(cls):
- # Need to look up the actual descriptor, not whatever might be
- # bound to the class. This needs to come from the __dict__ of the
- # declaring class.
- for parent in mro:
- try:
- attr = parent.__dict__[attr_name]
- except KeyError:
- continue
- else:
- break
- else:
- # In case we didn't find the attribute and didn't break above.
- # We should never hit this - it's just here for completeness
- # to exclude the possibility of attr being undefined.
- continue
-
- try:
- hook_config = attr.__marshmallow_hook__
- except AttributeError:
- pass
- else:
- for key in hook_config.keys():
- # Use name here so we can get the bound method later, in
- # case the processor was a descriptor or something.
- hooks[key].append(attr_name)
-
- return hooks
-
-
-class SchemaOpts:
- """class Meta options for the :class:`Schema`. Defines defaults."""
-
- def __init__(self, meta, ordered=False):
- self.fields = getattr(meta, "fields", ())
- if not isinstance(self.fields, (list, tuple)):
- raise ValueError("`fields` option must be a list or tuple.")
- self.additional = getattr(meta, "additional", ())
- if not isinstance(self.additional, (list, tuple)):
- raise ValueError("`additional` option must be a list or tuple.")
- if self.fields and self.additional:
- raise ValueError(
- "Cannot set both `fields` and `additional` options"
- " for the same Schema."
- )
- self.exclude = getattr(meta, "exclude", ())
- if not isinstance(self.exclude, (list, tuple)):
- raise ValueError("`exclude` must be a list or tuple.")
- self.dateformat = getattr(meta, "dateformat", None)
- self.datetimeformat = getattr(meta, "datetimeformat", None)
- if hasattr(meta, "json_module"):
- warnings.warn(
- "The json_module class Meta option is deprecated. Use render_module instead.",
- DeprecationWarning,
- )
- render_module = getattr(meta, "json_module", json)
- else:
- render_module = json
- self.render_module = getattr(meta, "render_module", render_module)
- self.ordered = getattr(meta, "ordered", ordered)
- self.index_errors = getattr(meta, "index_errors", True)
- self.include = getattr(meta, "include", {})
- self.load_only = getattr(meta, "load_only", ())
- self.dump_only = getattr(meta, "dump_only", ())
- self.unknown = getattr(meta, "unknown", RAISE)
- self.register = getattr(meta, "register", True)
-
-
-class BaseSchema(base.SchemaABC):
- """Base schema class with which to define custom schemas.
-
- Example usage:
-
- .. code-block:: python
-
- import datetime as dt
- from marshmallow import Schema, fields
-
- class Album:
- def __init__(self, title, release_date):
- self.title = title
- self.release_date = release_date
-
- class AlbumSchema(Schema):
- title = fields.Str()
- release_date = fields.Date()
-
- # Or, equivalently
- class AlbumSchema2(Schema):
- class Meta:
- fields = ("title", "release_date")
-
- album = Album("Beggars Banquet", dt.date(1968, 12, 6))
- schema = AlbumSchema()
- data = schema.dump(album)
- data # {'release_date': '1968-12-06', 'title': 'Beggars Banquet'}
-
- :param tuple|list only: Whitelist of the declared fields to select when
- instantiating the Schema. If None, all fields are used. Nested fields
- can be represented with dot delimiters.
- :param tuple|list exclude: Blacklist of the declared fields to exclude
- when instantiating the Schema. If a field appears in both `only` and
- `exclude`, it is not used. Nested fields can be represented with dot
- delimiters.
- :param bool many: Should be set to `True` if ``obj`` is a collection
- so that the object will be serialized to a list.
- :param dict context: Optional context passed to :class:`fields.Method` and
- :class:`fields.Function` fields.
- :param tuple|list load_only: Fields to skip during serialization (write-only fields)
- :param tuple|list dump_only: Fields to skip during deserialization (read-only fields)
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
-
- .. versionchanged:: 3.0.0
- `prefix` parameter removed.
-
- .. versionchanged:: 2.0.0
- `__validators__`, `__preprocessors__`, and `__data_handlers__` are removed in favor of
- `marshmallow.decorators.validates_schema`,
- `marshmallow.decorators.pre_load` and `marshmallow.decorators.post_dump`.
- `__accessor__` and `__error_handler__` are deprecated. Implement the
- `handle_error` and `get_attribute` methods instead.
- """
-
- TYPE_MAPPING = {
- str: ma_fields.String,
- bytes: ma_fields.String,
- dt.datetime: ma_fields.DateTime,
- float: ma_fields.Float,
- bool: ma_fields.Boolean,
- tuple: ma_fields.Raw,
- list: ma_fields.Raw,
- set: ma_fields.Raw,
- int: ma_fields.Integer,
- uuid.UUID: ma_fields.UUID,
- dt.time: ma_fields.Time,
- dt.date: ma_fields.Date,
- dt.timedelta: ma_fields.TimeDelta,
- decimal.Decimal: ma_fields.Decimal,
- }
- #: Overrides for default schema-level error messages
- error_messages = {}
-
- _default_error_messages = {
- "type": "Invalid input type.",
- "unknown": "Unknown field.",
- }
-
- OPTIONS_CLASS = SchemaOpts
-
- class Meta:
- """Options object for a Schema.
-
- Example usage: ::
-
- class Meta:
- fields = ("id", "email", "date_created")
- exclude = ("password", "secret_attribute")
-
- Available options:
-
- - ``fields``: Tuple or list of fields to include in the serialized result.
- - ``additional``: Tuple or list of fields to include *in addition* to the
- explicitly declared fields. ``additional`` and ``fields`` are
- mutually-exclusive options.
- - ``include``: Dictionary of additional fields to include in the schema. It is
- usually better to define fields as class variables, but you may need to
- use this option, e.g., if your fields are Python keywords. May be an
- `OrderedDict`.
- - ``exclude``: Tuple or list of fields to exclude in the serialized result.
- Nested fields can be represented with dot delimiters.
- - ``dateformat``: Default format for `Date <fields.Date>` fields.
- - ``datetimeformat``: Default format for `DateTime <fields.DateTime>` fields.
- - ``render_module``: Module to use for `loads <Schema.loads>` and `dumps <Schema.dumps>`.
- Defaults to `json` from the standard library.
- - ``ordered``: If `True`, order serialization output according to the
- order in which fields were declared. Output of `Schema.dump` will be a
- `collections.OrderedDict`.
- - ``index_errors``: If `True`, errors dictionaries will include the index
- of invalid items in a collection.
- - ``load_only``: Tuple or list of fields to exclude from serialized results.
- - ``dump_only``: Tuple or list of fields to exclude from deserialization
- - ``unknown``: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- - ``register``: Whether to register the `Schema` with marshmallow's internal
- class registry. Must be `True` if you intend to refer to this `Schema`
- by class name in `Nested` fields. Only set this to `False` when memory
- usage is critical. Defaults to `True`.
- """
-
- pass
-
- def __init__(
- self,
- *,
- only=None,
- exclude=(),
- many=False,
- context=None,
- load_only=(),
- dump_only=(),
- partial=False,
- unknown=None
- ):
- # Raise error if only or exclude is passed as string, not list of strings
- if only is not None and not is_collection(only):
- raise StringNotCollectionError('"only" should be a list of strings')
- if exclude is not None and not is_collection(exclude):
- raise StringNotCollectionError('"exclude" should be a list of strings')
- # copy declared fields from metaclass
- self.declared_fields = copy.deepcopy(self._declared_fields)
- self.many = many
- self.only = only
- self.exclude = set(self.opts.exclude) | set(exclude)
- self.ordered = self.opts.ordered
- self.load_only = set(load_only) or set(self.opts.load_only)
- self.dump_only = set(dump_only) or set(self.opts.dump_only)
- self.partial = partial
- self.unknown = unknown or self.opts.unknown
- self.context = context or {}
- self._normalize_nested_options()
- #: Dictionary mapping field_names -> :class:`Field` objects
- self.fields = self._init_fields()
- self.dump_fields, self.load_fields = self.dict_class(), self.dict_class()
- for field_name, field_obj in self.fields.items():
- if field_obj.load_only:
- self.load_fields[field_name] = field_obj
- elif field_obj.dump_only:
- self.dump_fields[field_name] = field_obj
- else:
- self.load_fields[field_name] = field_obj
- self.dump_fields[field_name] = field_obj
- messages = {}
- messages.update(self._default_error_messages)
- for cls in reversed(self.__class__.__mro__):
- messages.update(getattr(cls, "error_messages", {}))
- messages.update(self.error_messages or {})
- self.error_messages = messages
-
- def __repr__(self):
- return "<{ClassName}(many={self.many})>".format(
- ClassName=self.__class__.__name__, self=self
- )
-
- @property
- def dict_class(self):
- return OrderedDict if self.ordered else dict
-
- @property
- def set_class(self):
- return OrderedSet if self.ordered else set
-
- @classmethod
- def from_dict(
- cls, fields: typing.Dict[str, ma_fields.Field], *, name: str = "GeneratedSchema"
- ) -> typing.Type["Schema"]:
- """Generate a `Schema` class given a dictionary of fields.
-
- .. code-block:: python
-
- from marshmallow import Schema, fields
-
- PersonSchema = Schema.from_dict({"name": fields.Str()})
- print(PersonSchema().load({"name": "David"})) # => {'name': 'David'}
-
- Generated schemas are not added to the class registry and therefore cannot
- be referred to by name in `Nested` fields.
-
- :param dict fields: Dictionary mapping field names to field instances.
- :param str name: Optional name for the class, which will appear in
- the ``repr`` for the class.
-
- .. versionadded:: 3.0.0
- """
- attrs = fields.copy()
- attrs["Meta"] = type(
- "GeneratedMeta", (getattr(cls, "Meta", object),), {"register": False}
- )
- schema_cls = type(name, (cls,), attrs)
- return schema_cls
-
- ##### Override-able methods #####
-
- def handle_error(self, error, data, *, many, **kwargs):
- """Custom error handler function for the schema.
-
- :param ValidationError error: The `ValidationError` raised during (de)serialization.
- :param data: The original input data.
- :param bool many: Value of ``many`` on dump or load.
- :param bool partial: Value of ``partial`` on load.
-
- .. versionadded:: 2.0.0
-
- .. versionchanged:: 3.0.0rc9
- Receives `many` and `partial` (on deserialization) as keyword arguments.
- """
- pass
-
- def get_attribute(self, obj, attr, default):
- """Defines how to pull values from an object to serialize.
-
- .. versionadded:: 2.0.0
-
- .. versionchanged:: 3.0.0a1
- Changed position of ``obj`` and ``attr``.
- """
- return get_value(obj, attr, default)
-
- ##### Serialization/Deserialization API #####
-
- @staticmethod
- def _call_and_store(getter_func, data, *, field_name, error_store, index=None):
- """Call ``getter_func`` with ``data`` as its argument, and store any `ValidationErrors`.
-
- :param callable getter_func: Function for getting the serialized/deserialized
- value from ``data``.
- :param data: The data passed to ``getter_func``.
- :param str field_name: Field name.
- :param int index: Index of the item being validated, if validating a collection,
- otherwise `None`.
- """
- try:
- value = getter_func(data)
- except ValidationError as error:
- error_store.store_error(error.messages, field_name, index=index)
- # When a Nested field fails validation, the marshalled data is stored
- # on the ValidationError's valid_data attribute
- return error.valid_data or missing
- return value
-
- def _serialize(self, obj, *, many=False):
- """Serialize ``obj``.
-
- :param obj: The object(s) to serialize.
- :param bool many: `True` if ``data`` should be serialized as a collection.
- :return: A dictionary of the serialized data
-
- .. versionchanged:: 1.0.0
- Renamed from ``marshal``.
- """
- if many and obj is not None:
- return [self._serialize(d, many=False) for d in obj]
- ret = self.dict_class()
- for attr_name, field_obj in self.dump_fields.items():
- value = field_obj.serialize(attr_name, obj, accessor=self.get_attribute)
- if value is missing:
- continue
- key = field_obj.data_key or attr_name
- ret[key] = value
- return ret
-
- def dump(self, obj, *, many=None):
- """Serialize an object to native Python data types according to this
- Schema's fields.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :return: A dict of serialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the serialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if ``obj`` is invalid.
- .. versionchanged:: 3.0.0rc9
- Validation no longer occurs upon serialization.
- """
- many = self.many if many is None else bool(many)
- if many and is_iterable_but_not_string(obj):
- obj = list(obj)
-
- if self._has_processors(PRE_DUMP):
- processed_obj = self._invoke_dump_processors(
- PRE_DUMP, obj, many=many, original_data=obj
- )
- else:
- processed_obj = obj
-
- result = self._serialize(processed_obj, many=many)
-
- if self._has_processors(POST_DUMP):
- result = self._invoke_dump_processors(
- POST_DUMP, result, many=many, original_data=obj
- )
-
- return result
-
- def dumps(self, obj, *args, many=None, **kwargs):
- """Same as :meth:`dump`, except return a JSON-encoded string.
-
- :param obj: The object to serialize.
- :param bool many: Whether to serialize `obj` as a collection. If `None`, the value
- for `self.many` is used.
- :return: A ``json`` string
- :rtype: str
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the serialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if ``obj`` is invalid.
- """
- serialized = self.dump(obj, many=many)
- return self.opts.render_module.dumps(serialized, *args, **kwargs)
-
- def _deserialize(
- self, data, *, error_store, many=False, partial=False, unknown=RAISE, index=None
- ):
- """Deserialize ``data``.
-
- :param dict data: The data to deserialize.
- :param ErrorStore error_store: Structure to store errors.
- :param bool many: `True` if ``data`` should be deserialized as a collection.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- :param int index: Index of the item being serialized (for storing errors) if
- serializing a collection, otherwise `None`.
- :return: A dictionary of the deserialized data.
- """
- index_errors = self.opts.index_errors
- index = index if index_errors else None
- if many:
- if not is_collection(data):
- error_store.store_error([self.error_messages["type"]], index=index)
- ret = []
- else:
- ret = [
- self._deserialize(
- d,
- error_store=error_store,
- many=False,
- partial=partial,
- unknown=unknown,
- index=idx,
- )
- for idx, d in enumerate(data)
- ]
- return ret
- ret = self.dict_class()
- # Check data is a dict
- if not isinstance(data, Mapping):
- error_store.store_error([self.error_messages["type"]], index=index)
- else:
- partial_is_collection = is_collection(partial)
- for attr_name, field_obj in self.load_fields.items():
- field_name = field_obj.data_key or attr_name
- raw_value = data.get(field_name, missing)
- if raw_value is missing:
- # Ignore missing field if we're allowed to.
- if partial is True or (
- partial_is_collection and attr_name in partial
- ):
- continue
- d_kwargs = {}
- # Allow partial loading of nested schemas.
- if partial_is_collection:
- prefix = field_name + "."
- len_prefix = len(prefix)
- sub_partial = [
- f[len_prefix:] for f in partial if f.startswith(prefix)
- ]
- d_kwargs["partial"] = sub_partial
- else:
- d_kwargs["partial"] = partial
- getter = lambda val: field_obj.deserialize(
- val, field_name, data, **d_kwargs
- )
- value = self._call_and_store(
- getter_func=getter,
- data=raw_value,
- field_name=field_name,
- error_store=error_store,
- index=index,
- )
- if value is not missing:
- key = field_obj.attribute or attr_name
- set_value(ret, key, value)
- if unknown != EXCLUDE:
- fields = {
- field_obj.data_key or field_name
- for field_name, field_obj in self.load_fields.items()
- }
- for key in set(data) - fields:
- value = data[key]
- if unknown == INCLUDE:
- set_value(ret, key, value)
- elif unknown == RAISE:
- error_store.store_error(
- [self.error_messages["unknown"]],
- key,
- (index if index_errors else None),
- )
- return ret
-
- def load(self, data, *, many=None, partial=None, unknown=None):
- """Deserialize a data structure to an object defined by this Schema's fields.
-
- :param dict data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :return: A dict of deserialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the deserialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if invalid data are passed.
- """
- return self._do_load(
- data, many=many, partial=partial, unknown=unknown, postprocess=True
- )
-
- def loads(self, json_data, *, many=None, partial=None, unknown=None, **kwargs):
- """Same as :meth:`load`, except it takes a JSON string as input.
-
- :param str json_data: A JSON string of the data to deserialize.
- :param bool many: Whether to deserialize `obj` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :return: A dict of deserialized data
- :rtype: dict
-
- .. versionadded:: 1.0.0
- .. versionchanged:: 3.0.0b7
- This method returns the deserialized data rather than a ``(data, errors)`` duple.
- A :exc:`ValidationError <marshmallow.exceptions.ValidationError>` is raised
- if invalid data are passed.
- """
- data = self.opts.render_module.loads(json_data, **kwargs)
- return self.load(data, many=many, partial=partial, unknown=unknown)
-
- def _run_validator(
- self,
- validator_func,
- output,
- *,
- original_data,
- error_store,
- many,
- partial,
- pass_original,
- index=None
- ):
- try:
- if pass_original: # Pass original, raw data (before unmarshalling)
- validator_func(output, original_data, partial=partial, many=many)
- else:
- validator_func(output, partial=partial, many=many)
- except ValidationError as err:
- error_store.store_error(err.messages, err.field_name, index=index)
-
- def validate(self, data, *, many=None, partial=None):
- """Validate `data` against the schema, returning a dictionary of
- validation errors.
-
- :param dict data: The data to validate.
- :param bool many: Whether to validate `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to ignore missing fields and not require
- any fields declared. Propagates down to ``Nested`` fields as well. If
- its value is an iterable, only missing fields listed in that iterable
- will be ignored. Use dot delimiters to specify nested fields.
- :return: A dictionary of validation errors.
- :rtype: dict
-
- .. versionadded:: 1.1.0
- """
- try:
- self._do_load(data, many=many, partial=partial, postprocess=False)
- except ValidationError as exc:
- return exc.messages
- return {}
-
- ##### Private Helpers #####
-
- def _do_load(
- self, data, *, many=None, partial=None, unknown=None, postprocess=True
- ):
- """Deserialize `data`, returning the deserialized result.
-
- :param data: The data to deserialize.
- :param bool many: Whether to deserialize `data` as a collection. If `None`, the
- value for `self.many` is used.
- :param bool|tuple partial: Whether to validate required fields. If its
- value is an iterable, only fields listed in that iterable will be
- allowed missing. If `True`, all fields will be allowed missing.
- If `None`, the value for `self.partial` is used.
- :param unknown: Whether to exclude, include, or raise an error for unknown
- fields in the data. Use `EXCLUDE`, `INCLUDE` or `RAISE`.
- If `None`, the value for `self.unknown` is used.
- :param bool postprocess: Whether to run post_load methods.
- :return: A dict of deserialized data
- :rtype: dict
- """
- error_store = ErrorStore()
- errors = {}
- many = self.many if many is None else bool(many)
- unknown = unknown or self.unknown
- if partial is None:
- partial = self.partial
- # Run preprocessors
- if self._has_processors(PRE_LOAD):
- try:
- processed_data = self._invoke_load_processors(
- PRE_LOAD, data, many=many, original_data=data, partial=partial
- )
- except ValidationError as err:
- errors = err.normalized_messages()
- result = None
- else:
- processed_data = data
- if not errors:
- # Deserialize data
- result = self._deserialize(
- processed_data,
- error_store=error_store,
- many=many,
- partial=partial,
- unknown=unknown,
- )
- # Run field-level validation
- self._invoke_field_validators(
- error_store=error_store, data=result, many=many
- )
- # Run schema-level validation
- if self._has_processors(VALIDATES_SCHEMA):
- field_errors = bool(error_store.errors)
- self._invoke_schema_validators(
- error_store=error_store,
- pass_many=True,
- data=result,
- original_data=data,
- many=many,
- partial=partial,
- field_errors=field_errors,
- )
- self._invoke_schema_validators(
- error_store=error_store,
- pass_many=False,
- data=result,
- original_data=data,
- many=many,
- partial=partial,
- field_errors=field_errors,
- )
- errors = error_store.errors
- # Run post processors
- if not errors and postprocess and self._has_processors(POST_LOAD):
- try:
- result = self._invoke_load_processors(
- POST_LOAD,
- result,
- many=many,
- original_data=data,
- partial=partial,
- )
- except ValidationError as err:
- errors = err.normalized_messages()
- if errors:
- exc = ValidationError(errors, data=data, valid_data=result)
- self.handle_error(exc, data, many=many, partial=partial)
- raise exc
-
- return result
-
- def _normalize_nested_options(self):
- """Apply then flatten nested schema options"""
- if self.only is not None:
- # Apply the only option to nested fields.
- self.__apply_nested_option("only", self.only, "intersection")
- # Remove the child field names from the only option.
- self.only = self.set_class([field.split(".", 1)[0] for field in self.only])
- if self.exclude:
- # Apply the exclude option to nested fields.
- self.__apply_nested_option("exclude", self.exclude, "union")
- # Remove the parent field names from the exclude option.
- self.exclude = self.set_class(
- [field for field in self.exclude if "." not in field]
- )
-
- def __apply_nested_option(self, option_name, field_names, set_operation):
- """Apply nested options to nested fields"""
- # Split nested field names on the first dot.
- nested_fields = [name.split(".", 1) for name in field_names if "." in name]
- # Partition the nested field names by parent field.
- nested_options = defaultdict(list)
- for parent, nested_names in nested_fields:
- nested_options[parent].append(nested_names)
- # Apply the nested field options.
- for key, options in iter(nested_options.items()):
- new_options = self.set_class(options)
- original_options = getattr(self.declared_fields[key], option_name, ())
- if original_options:
- if set_operation == "union":
- new_options |= self.set_class(original_options)
- if set_operation == "intersection":
- new_options &= self.set_class(original_options)
- setattr(self.declared_fields[key], option_name, new_options)
-
- def _init_fields(self):
- """Update fields based on schema options."""
- if self.opts.fields:
- available_field_names = self.set_class(self.opts.fields)
- else:
- available_field_names = self.set_class(self.declared_fields.keys())
- if self.opts.additional:
- available_field_names |= self.set_class(self.opts.additional)
-
- invalid_fields = self.set_class()
-
- if self.only is not None:
- # Return only fields specified in only option
- field_names = self.set_class(self.only)
-
- invalid_fields |= field_names - available_field_names
- else:
- field_names = available_field_names
-
- # If "exclude" option or param is specified, remove those fields.
- if self.exclude:
- # Note that this isn't available_field_names, since we want to
- # apply "only" for the actual calculation.
- field_names = field_names - self.exclude
- invalid_fields |= self.exclude - available_field_names
-
- if invalid_fields:
- message = "Invalid fields for {}: {}.".format(self, invalid_fields)
- raise ValueError(message)
-
- fields_dict = self.dict_class()
- for field_name in field_names:
- field_obj = self.declared_fields.get(field_name, ma_fields.Inferred())
- self._bind_field(field_name, field_obj)
- fields_dict[field_name] = field_obj
-
- dump_data_keys = [
- obj.data_key or name
- for name, obj in fields_dict.items()
- if not obj.load_only
- ]
- if len(dump_data_keys) != len(set(dump_data_keys)):
- data_keys_duplicates = {
- x for x in dump_data_keys if dump_data_keys.count(x) > 1
- }
- raise ValueError(
- "The data_key argument for one or more fields collides "
- "with another field's name or data_key argument. "
- "Check the following field names and "
- "data_key arguments: {}".format(list(data_keys_duplicates))
- )
-
- load_attributes = [
- obj.attribute or name
- for name, obj in fields_dict.items()
- if not obj.dump_only
- ]
- if len(load_attributes) != len(set(load_attributes)):
- attributes_duplicates = {
- x for x in load_attributes if load_attributes.count(x) > 1
- }
- raise ValueError(
- "The attribute argument for one or more fields collides "
- "with another field's name or attribute argument. "
- "Check the following field names and "
- "attribute arguments: {}".format(list(attributes_duplicates))
- )
-
- return fields_dict
-
- def on_bind_field(self, field_name, field_obj):
- """Hook to modify a field when it is bound to the `Schema`.
-
- No-op by default.
- """
- return None
-
- def _bind_field(self, field_name, field_obj):
- """Bind field to the schema, setting any necessary attributes on the
- field (e.g. parent and name).
-
- Also set field load_only and dump_only values if field_name was
- specified in ``class Meta``.
- """
- try:
- if field_name in self.load_only:
- field_obj.load_only = True
- if field_name in self.dump_only:
- field_obj.dump_only = True
- field_obj._bind_to_schema(field_name, self)
- self.on_bind_field(field_name, field_obj)
- except TypeError as error:
- # field declared as a class, not an instance
- if isinstance(field_obj, type) and issubclass(field_obj, base.FieldABC):
- msg = (
- 'Field for "{}" must be declared as a '
- "Field instance, not a class. "
- 'Did you mean "fields.{}()"?'.format(field_name, field_obj.__name__)
- )
- raise TypeError(msg) from error
-
- @lru_cache(maxsize=8)
- def _has_processors(self, tag):
- return self._hooks[(tag, True)] or self._hooks[(tag, False)]
-
- def _invoke_dump_processors(self, tag, data, *, many, original_data=None):
- # The pass_many post-dump processors may do things like add an envelope, so
- # invoke those after invoking the non-pass_many processors which will expect
- # to get a list of items.
- data = self._invoke_processors(
- tag, pass_many=False, data=data, many=many, original_data=original_data
- )
- data = self._invoke_processors(
- tag, pass_many=True, data=data, many=many, original_data=original_data
- )
- return data
-
- def _invoke_load_processors(self, tag, data, *, many, original_data, partial):
- # This has to invert the order of the dump processors, so run the pass_many
- # processors first.
- data = self._invoke_processors(
- tag,
- pass_many=True,
- data=data,
- many=many,
- original_data=original_data,
- partial=partial,
- )
- data = self._invoke_processors(
- tag,
- pass_many=False,
- data=data,
- many=many,
- original_data=original_data,
- partial=partial,
- )
- return data
-
- def _invoke_field_validators(self, *, error_store, data, many):
- for attr_name in self._hooks[VALIDATES]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_hook__[VALIDATES]
- field_name = validator_kwargs["field_name"]
-
- try:
- field_obj = self.fields[field_name]
- except KeyError as error:
- if field_name in self.declared_fields:
- continue
- raise ValueError(
- '"{}" field does not exist.'.format(field_name)
- ) from error
-
- if many:
- for idx, item in enumerate(data):
- try:
- value = item[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = self._call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.data_key or field_name,
- error_store=error_store,
- index=(idx if self.opts.index_errors else None),
- )
- if validated_value is missing:
- data[idx].pop(field_name, None)
- else:
- try:
- value = data[field_obj.attribute or field_name]
- except KeyError:
- pass
- else:
- validated_value = self._call_and_store(
- getter_func=validator,
- data=value,
- field_name=field_obj.data_key or field_name,
- error_store=error_store,
- )
- if validated_value is missing:
- data.pop(field_name, None)
-
- def _invoke_schema_validators(
- self,
- *,
- error_store,
- pass_many,
- data,
- original_data,
- many,
- partial,
- field_errors=False
- ):
- for attr_name in self._hooks[(VALIDATES_SCHEMA, pass_many)]:
- validator = getattr(self, attr_name)
- validator_kwargs = validator.__marshmallow_hook__[
- (VALIDATES_SCHEMA, pass_many)
- ]
- if field_errors and validator_kwargs["skip_on_field_errors"]:
- continue
- pass_original = validator_kwargs.get("pass_original", False)
-
- if many and not pass_many:
- for idx, (item, orig) in enumerate(zip(data, original_data)):
- self._run_validator(
- validator,
- item,
- original_data=orig,
- error_store=error_store,
- many=many,
- partial=partial,
- index=idx,
- pass_original=pass_original,
- )
- else:
- self._run_validator(
- validator,
- data,
- original_data=original_data,
- error_store=error_store,
- many=many,
- pass_original=pass_original,
- partial=partial,
- )
-
- def _invoke_processors(
- self, tag, *, pass_many, data, many, original_data=None, **kwargs
- ):
- key = (tag, pass_many)
- for attr_name in self._hooks[key]:
- # This will be a bound method.
- processor = getattr(self, attr_name)
-
- processor_kwargs = processor.__marshmallow_hook__[key]
- pass_original = processor_kwargs.get("pass_original", False)
-
- if pass_many:
- if pass_original:
- data = processor(data, original_data, many=many, **kwargs)
- else:
- data = processor(data, many=many, **kwargs)
- elif many:
- if pass_original:
- data = [
- processor(item, original, many=many, **kwargs)
- for item, original in zip(data, original_data)
- ]
- else:
- data = [processor(item, many=many, **kwargs) for item in data]
- else:
- if pass_original:
- data = processor(data, original_data, many=many, **kwargs)
- else:
- data = processor(data, many=many, **kwargs)
- return data
-
-
-class Schema(BaseSchema, metaclass=SchemaMeta):
- __doc__ = BaseSchema.__doc__
+or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
\ No newline at end of file
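The patch above removes, among other things, the schema's `data_key` and `attribute` collision checks, which detect values that appear more than once. The same idea can be sketched independently with `collections.Counter` (a minimal stand-alone sketch, not marshmallow's actual API; `find_collisions` and the sample keys are illustrative):

```python
from collections import Counter

def find_collisions(keys):
    """Return the set of keys that appear more than once."""
    counts = Counter(keys)
    return {key for key, n in counts.items() if n > 1}

# Mirrors the removed schema check: each non-load_only field contributes
# its data_key (or, failing that, its name) to the serialized output,
# and any value seen twice is reported as a collision.
dump_data_keys = ["email", "name", "email"]  # two fields collide on "email"
duplicates = find_collisions(dump_data_keys)
print(sorted(duplicates))  # ['email']
```

The `Counter`-based version is O(n) rather than the O(n^2) `list.count` scan in the original, though for a handful of fields the difference is immaterial.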
+ source /opt/miniconda3/bin/activate
++ _CONDA_ROOT=/opt/miniconda3
++ . /opt/miniconda3/etc/profile.d/conda.sh
+++ export CONDA_EXE=/opt/miniconda3/bin/conda
+++ CONDA_EXE=/opt/miniconda3/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ '[' -z x ']'
++ conda activate
++ local cmd=activate
++ case "$cmd" in
++ __conda_activate activate
++ '[' -n '' ']'
++ local ask_conda
+++ PS1='(testbed) '
+++ __conda_exe shell.posix activate
+++ /opt/miniconda3/bin/conda shell.posix activate
++ ask_conda='PS1='\''(base) '\''
export PATH='\''/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3'\''
export CONDA_SHLVL='\''4'\''
export CONDA_DEFAULT_ENV='\''base'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_PREFIX_3='\''/opt/miniconda3/envs/testbed'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
++ eval 'PS1='\''(base) '\''
export PATH='\''/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3'\''
export CONDA_SHLVL='\''4'\''
export CONDA_DEFAULT_ENV='\''base'\''
export CONDA_PROMPT_MODIFIER='\''(base) '\''
export CONDA_PREFIX_3='\''/opt/miniconda3/envs/testbed'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
+++ PS1='(base) '
+++ export PATH=/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ PATH=/opt/miniconda3/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+++ export CONDA_PREFIX=/opt/miniconda3
+++ CONDA_PREFIX=/opt/miniconda3
+++ export CONDA_SHLVL=4
+++ CONDA_SHLVL=4
+++ export CONDA_DEFAULT_ENV=base
+++ CONDA_DEFAULT_ENV=base
+++ export 'CONDA_PROMPT_MODIFIER=(base) '
+++ CONDA_PROMPT_MODIFIER='(base) '
+++ export CONDA_PREFIX_3=/opt/miniconda3/envs/testbed
+++ CONDA_PREFIX_3=/opt/miniconda3/envs/testbed
+++ export CONDA_EXE=/opt/miniconda3/bin/conda
+++ CONDA_EXE=/opt/miniconda3/bin/conda
+++ export _CE_M=
+++ _CE_M=
+++ export _CE_CONDA=
+++ _CE_CONDA=
+++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
++ __conda_hashr
++ '[' -n '' ']'
++ '[' -n '' ']'
++ hash -r
+ conda activate testbed
+ local cmd=activate
+ case "$cmd" in
+ __conda_activate activate testbed
+ '[' -n '' ']'
+ local ask_conda
++ PS1='(base) '
++ __conda_exe shell.posix activate testbed
++ /opt/miniconda3/bin/conda shell.posix activate testbed
+ ask_conda='PS1='\''(testbed) '\''
export PATH='\''/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3/envs/testbed'\''
export CONDA_SHLVL='\''5'\''
export CONDA_DEFAULT_ENV='\''testbed'\''
export CONDA_PROMPT_MODIFIER='\''(testbed) '\''
export CONDA_PREFIX_4='\''/opt/miniconda3'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
+ eval 'PS1='\''(testbed) '\''
export PATH='\''/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'\''
export CONDA_PREFIX='\''/opt/miniconda3/envs/testbed'\''
export CONDA_SHLVL='\''5'\''
export CONDA_DEFAULT_ENV='\''testbed'\''
export CONDA_PROMPT_MODIFIER='\''(testbed) '\''
export CONDA_PREFIX_4='\''/opt/miniconda3'\''
export CONDA_EXE='\''/opt/miniconda3/bin/conda'\''
export _CE_M='\'''\''
export _CE_CONDA='\'''\''
export CONDA_PYTHON_EXE='\''/opt/miniconda3/bin/python'\'''
++ PS1='(testbed) '
++ export PATH=/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ PATH=/opt/miniconda3/envs/testbed/bin:/opt/miniconda3/condabin:/opt/miniconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
++ export CONDA_PREFIX=/opt/miniconda3/envs/testbed
++ CONDA_PREFIX=/opt/miniconda3/envs/testbed
++ export CONDA_SHLVL=5
++ CONDA_SHLVL=5
++ export CONDA_DEFAULT_ENV=testbed
++ CONDA_DEFAULT_ENV=testbed
++ export 'CONDA_PROMPT_MODIFIER=(testbed) '
++ CONDA_PROMPT_MODIFIER='(testbed) '
++ export CONDA_PREFIX_4=/opt/miniconda3
++ CONDA_PREFIX_4=/opt/miniconda3
++ export CONDA_EXE=/opt/miniconda3/bin/conda
++ CONDA_EXE=/opt/miniconda3/bin/conda
++ export _CE_M=
++ _CE_M=
++ export _CE_CONDA=
++ _CE_CONDA=
++ export CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
++ CONDA_PYTHON_EXE=/opt/miniconda3/bin/python
+ __conda_hashr
+ '[' -n '' ']'
+ '[' -n '' ']'
+ hash -r
+ python -m pip install -e '.[dev]'
Obtaining file:///testbed
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Checking if build backend supports build_editable: started
Checking if build backend supports build_editable: finished with status 'done'
Getting requirements to build editable: started
Getting requirements to build editable: finished with status 'done'
Preparing editable metadata (pyproject.toml): started
Preparing editable metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: pytest in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (8.3.5)
Requirement already satisfied: pytz in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (2025.2)
Requirement already satisfied: simplejson in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (3.20.1)
Requirement already satisfied: flake8==3.7.8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (3.7.8)
Requirement already satisfied: flake8-bugbear==19.8.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (19.8.0)
Requirement already satisfied: pre-commit~=1.17 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (1.21.0)
Requirement already satisfied: tox in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from marshmallow==3.0.0) (4.25.0)
Requirement already satisfied: entrypoints<0.4.0,>=0.3.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.8->marshmallow==3.0.0) (0.3)
Requirement already satisfied: pyflakes<2.2.0,>=2.1.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.8->marshmallow==3.0.0) (2.1.1)
Requirement already satisfied: pycodestyle<2.6.0,>=2.5.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.8->marshmallow==3.0.0) (2.5.0)
Requirement already satisfied: mccabe<0.7.0,>=0.6.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8==3.7.8->marshmallow==3.0.0) (0.6.1)
Requirement already satisfied: attrs in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from flake8-bugbear==19.8.0->marshmallow==3.0.0) (25.3.0)
Requirement already satisfied: aspy.yaml in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (1.3.0)
Requirement already satisfied: cfgv>=2.0.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (3.4.0)
Requirement already satisfied: identify>=1.0.0 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (2.6.10)
Requirement already satisfied: nodeenv>=0.11.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (1.9.1)
Requirement already satisfied: pyyaml in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (6.0.2)
Requirement already satisfied: six in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (1.17.0)
Requirement already satisfied: toml in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (0.10.2)
Requirement already satisfied: virtualenv>=15.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pre-commit~=1.17->marshmallow==3.0.0) (20.31.1)
Requirement already satisfied: distlib<1,>=0.3.7 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=15.2->pre-commit~=1.17->marshmallow==3.0.0) (0.3.9)
Requirement already satisfied: filelock<4,>=3.12.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=15.2->pre-commit~=1.17->marshmallow==3.0.0) (3.18.0)
Requirement already satisfied: platformdirs<5,>=3.9.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from virtualenv>=15.2->pre-commit~=1.17->marshmallow==3.0.0) (4.3.7)
Requirement already satisfied: exceptiongroup>=1.0.0rc8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (1.2.2)
Requirement already satisfied: iniconfig in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (2.1.0)
Requirement already satisfied: packaging in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (25.0)
Requirement already satisfied: pluggy<2,>=1.5 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (1.5.0)
Requirement already satisfied: tomli>=1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from pytest->marshmallow==3.0.0) (2.2.1)
Requirement already satisfied: cachetools>=5.5.1 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (5.5.2)
Requirement already satisfied: chardet>=5.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (5.2.0)
Requirement already satisfied: colorama>=0.4.6 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (0.4.6)
Requirement already satisfied: pyproject-api>=1.8 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (1.9.0)
Requirement already satisfied: typing-extensions>=4.12.2 in /opt/miniconda3/envs/testbed/lib/python3.9/site-packages (from tox->marshmallow==3.0.0) (4.15.0)
Building wheels for collected packages: marshmallow
Building editable for marshmallow (pyproject.toml): started
Building editable for marshmallow (pyproject.toml): finished with status 'done'
Created wheel for marshmallow: filename=marshmallow-3.0.0-0.editable-py2.py3-none-any.whl size=4552 sha256=8eddfa9b407b64bf895cfaf6154f12caaf373b314c5b198e9399f03138ee3006
Stored in directory: /tmp/pip-ephem-wheel-cache-g0rhbmkj/wheels/7d/66/67/70d1ee2124ccf21d601c352e25cdca10f611f7c8b3f9ffb9e4
Successfully built marshmallow
Installing collected packages: marshmallow
Attempting uninstall: marshmallow
Found existing installation: marshmallow 3.0.0
Uninstalling marshmallow-3.0.0:
Successfully uninstalled marshmallow-3.0.0
Successfully installed marshmallow-3.0.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager, possibly rendering your system unusable. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv. Use the --root-user-action option if you know what you are doing and want to suppress this warning.
+ git checkout b40a0f4e33823e6d0f341f7e8684e359a99060d1 tests/test_fields.py
Updated 0 paths from 56ab4168
+ git apply -v -
Checking patch tests/test_fields.py...
Applied patch tests/test_fields.py cleanly.
+ : '>>>>> Start Test Output'
+ pytest -rA tests/test_fields.py
ImportError while loading conftest '/testbed/tests/conftest.py'.
tests/conftest.py:4: in <module>
from tests.base import User, UserSchema, Blog
tests/base.py:9: in <module>
from marshmallow import Schema, fields, post_load, validate, missing
src/marshmallow/__init__.py:1: in <module>
from marshmallow.schema import Schema, SchemaOpts
E File "/testbed/src/marshmallow/schema.py", line 1
E or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
E ^
E SyntaxError: invalid syntax
+ : '>>>>> End Test Output'
+ git checkout b40a0f4e33823e6d0f341f7e8684e359a99060d1 tests/test_fields.py
Updated 1 path from 56ab4168
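The failure above is mechanical rather than semantic: the agent's patch left a bare `or getattr(...)` fragment as the first line of `src/marshmallow/schema.py`, and an expression cannot begin with the `or` keyword, so the module fails to import before any test runs. The built-in `compile()` reproduces the error (a minimal sketch; the filename argument is illustrative):

```python
fragment = "or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)\n"

try:
    # Compile the fragment as a module, the way the import system would.
    compile(fragment, "schema.py", "exec")
except SyntaxError as exc:
    # Matches the pytest output: SyntaxError at line 1 of schema.py.
    print("SyntaxError at line", exc.lineno)
```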
Reference output
diff --git a/src/marshmallow/fields.py b/src/marshmallow/fields.py
--- a/src/marshmallow/fields.py
+++ b/src/marshmallow/fields.py
@@ -1114,7 +1114,7 @@ def _bind_to_schema(self, field_name, schema):
super()._bind_to_schema(field_name, schema)
self.format = (
self.format
- or getattr(schema.opts, self.SCHEMA_OPTS_VAR_NAME)
+ or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
or self.DEFAULT_FORMAT
)
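The reference fix resolves the format option from the root schema's opts rather than the immediately enclosing schema's, so a field bound inside a nested container still sees the outermost `class Meta` setting. The lookup chain can be sketched with plain classes (a simplified model under assumed names, not marshmallow's real `Schema`/`Field` classes):

```python
class Opts:
    def __init__(self, datetimeformat=None):
        self.datetimeformat = datetimeformat

class Schema:
    def __init__(self, opts, parent=None):
        self.opts = opts
        self.parent = parent

class DateTimeField:
    SCHEMA_OPTS_VAR_NAME = "datetimeformat"
    DEFAULT_FORMAT = "iso"

    def __init__(self, format=None):
        self.format = format
        self.parent = None

    @property
    def root(self):
        # Walk up to the outermost schema, analogous to Field.root.
        node = self.parent
        while getattr(node, "parent", None) is not None:
            node = node.parent
        return node

    def bind(self, schema):
        self.parent = schema
        # The fixed lookup order: explicit field format, then the ROOT
        # schema's opts, then the field's built-in default.
        self.format = (
            self.format
            or getattr(self.root.opts, self.SCHEMA_OPTS_VAR_NAME)
            or self.DEFAULT_FORMAT
        )

outer = Schema(Opts(datetimeformat="rfc"))
inner = Schema(Opts(), parent=outer)  # nested schema with no format set
field = DateTimeField()
field.bind(inner)
print(field.format)  # "rfc" -- inherited from the root schema's opts
```

With the pre-fix lookup (`schema.opts` instead of `self.root.opts`), the same binding would find `None` on the inner schema's opts and fall back to `"iso"`, ignoring the outer `Meta` setting.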
Rerun config
Reuse this benchmark setup
Copy the config or relaunch a run with the same configuration.
Benchmark
swe_bench / lite / dev
Concurrency
2
Agent image
agentarena-build:b865fe806fba4e868a5c821a673dae5f
Build source
https://github.com/jiviny/Benchmark-Testing@HEAD
Show exact run metadata
2 pinned instances, 2 sandboxes, 1 reported model.
Pinned instance ids
marshmallow-code__marshmallow-1359, marshmallow-code__marshmallow-1343
Sandbox ids
32d27182-ed3e-4c37-8084-d3b787d1e7e1, d775c89c-8908-41be-af7e-7b37ac23153c
Run started
Mar 31, 2026, 2:29 AM UTC
Run completed
Mar 31, 2026, 2:32 AM UTC
Reported models
claude-sonnet-4-5-20250929
Operational details
Build, live sandboxes, and recent events
Collapsed by default for finished runs.
Build: Completed (2 events)
Agent build
Status: Completed
Source https://github.com/jiviny/Benchmark-Testing@HEAD | agentarena-build:b865fe806fba4e868a5c821a673dae5f
Started Mar 31, 2026, 2:29 AM UTC | Completed Mar 31, 2026, 2:29 AM UTC
Show build log
Cloning into '/tmp/agentarena-build-ub8efjus/repo'...
Sending build context to Docker daemon 99.84kB
Step 1/5 : FROM python:3.11-slim
---> e67db9b14d09
Step 2/5 : WORKDIR /app
---> Using cache
---> 7c181af3b0fb
Step 3/5 : COPY . /app
---> Using cache
---> deb3b8c7eb33
Step 4/5 : RUN if [ -f requirements.txt ]; then python -m pip install --no-cache-dir -r requirements.txt; fi
---> Using cache
---> 9d5519fae151
Step 5/5 : CMD ["python", "/app/agent.py"]
---> Using cache
---> 049bc4a23aab
Successfully built 049bc4a23aab
Successfully tagged agentarena-build:b865fe806fba4e868a5c821a673dae5f
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
BuildKit is currently disabled; enable it by removing the DOCKER_BUILDKIT=0
environment-variable.
Sandbox activity
Active sandboxes
Recent events
Latest run activity
marshmallow-code__marshmallow-1343
Not resolved by official SWE-bench grading. Fail-to-pass: 0%. Pass-to-pass: 0%.
2:32 AM
marshmallow-code__marshmallow-1359
Not resolved by official SWE-bench grading. Fail-to-pass: 0%. Pass-to-pass: 0%.
2:32 AM