⚡️ Speed up function process_result by 12% in PR #3819 (feature/defer) #3938

Closed

Conversation


@codeflash-ai codeflash-ai bot commented Jul 1, 2025

⚡️ This pull request contains optimizations for PR #3819

If you approve this dependent PR, these changes will be merged into the original PR branch feature/defer.

This PR will be automatically closed if the original PR is merged.


📄 12% (0.12x) speedup for process_result in strawberry/http/__init__.py

⏱️ Runtime: 100 microseconds → 89.8 microseconds (best of 153 runs)

📝 Explanation and details

Here is an optimized version. Improvements:

  • Removed the unnecessary intermediate variables (errors, extensions) that are only used once.
  • Avoided repeated dictionary unpacking by building the dictionary with direct assignments and only adding keys if necessary.
  • Used if-else statements for error and extensions blocks for slight speed-ups over dictionary unpacking on small dicts.

Rewritten code.

This version allocates less intermediate data and does not do work unless necessary.
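The rewritten code itself is not captured on this page. Based on the changes described above and the regression tests below, a minimal sketch of the optimized function might look like this (the `GraphQLHTTPResponse` alias and the exact guard conditions are assumptions, not the PR's verbatim diff):

```python
from types import SimpleNamespace
from typing import Any, Dict

GraphQLHTTPResponse = Dict[str, Any]


def process_result(result: Any) -> GraphQLHTTPResponse:
    # Build the response dict once, with only the "data" key up front.
    data: GraphQLHTTPResponse = {"data": result.data}
    # Attach "errors" only for a non-empty errors list.
    if result.errors:
        data["errors"] = [err.formatted for err in result.errors]
    # Attach "extensions" whenever one was provided — the tests below expect
    # even an empty extensions dict to keep its key.
    if result.extensions is not None:
        data["extensions"] = result.extensions
    return data


result = SimpleNamespace(data={"foo": "bar"}, errors=None, extensions=None)
print(process_result(result))  # {'data': {'foo': 'bar'}}
```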

Correctness verification report:

⚙️ Existing Unit Tests: 🔘 None Found
🌀 Generated Regression Tests: 40 Passed
⏪ Replay Tests: 🔘 None Found
🔎 Concolic Coverage Tests: 🔘 None Found
📊 Tests Coverage: undefined
🌀 Generated Regression Tests and Runtime
from __future__ import annotations

from typing import Any, Dict

from strawberry.http.__init__ import process_result

# --- Minimal stubs for types used in process_result ---

# Simulate the GraphQLHTTPResponse type as a dict
GraphQLHTTPResponse = Dict[str, Any]

# Simulate ResultType (with .data, .errors, .extensions)
class DummyError:
    def __init__(self, formatted):
        self.formatted = formatted

class DummyResult:
    def __init__(self, data=None, errors=None, extensions=None):
        self.data = data
        self.errors = errors
        self.extensions = extensions

# Simulate GraphQLIncrementalExecutionResults as a marker class
class GraphQLIncrementalExecutionResults:
    pass

ResultType = Any  # For our test purposes

# --- Unit tests ---

# 1. Basic Test Cases

def test_process_result_basic_only_data():
    # Only data, no errors, no extensions
    result = DummyResult(data={"foo": "bar"})
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_basic_with_errors():
    # Data and errors, no extensions
    errors = [DummyError({"msg": "fail1"}), DummyError({"msg": "fail2"})]
    result = DummyResult(data=None, errors=errors)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_basic_with_extensions():
    # Data and extensions, no errors
    extensions = {"cost": 42}
    result = DummyResult(data={"foo": "bar"}, extensions=extensions)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_basic_with_errors_and_extensions():
    # Data, errors, and extensions
    errors = [DummyError({"msg": "fail"})]
    extensions = {"cost": 99}
    result = DummyResult(data=123, errors=errors, extensions=extensions)
    codeflash_output = process_result(result); out = codeflash_output

# 2. Edge Test Cases


def test_process_result_none_data():
    # data is None, no errors, no extensions
    result = DummyResult(data=None)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_empty_errors_list():
    # errors is an empty list (should not include "errors" key)
    result = DummyResult(data="abc", errors=[])
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_empty_extensions_dict():
    # extensions is an empty dict (should include "extensions" key)
    result = DummyResult(data="abc", extensions={})
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_errors_is_none():
    # errors is None (should not include "errors" key)
    result = DummyResult(data=1, errors=None)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_extensions_is_none():
    # extensions is None (should not include "extensions" key)
    result = DummyResult(data=2, extensions=None)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_errors_and_extensions_are_none():
    # Both errors and extensions are None
    result = DummyResult(data=3, errors=None, extensions=None)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_errors_and_extensions_are_empty():
    # Both errors and extensions are empty
    result = DummyResult(data=4, errors=[], extensions={})
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_errors_with_none_formatted():
    # errors list contains an error with formatted=None
    errors = [DummyError(None)]
    result = DummyResult(data="x", errors=errors)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_extensions_with_various_types():
    # extensions is a non-dict type (should still be included)
    extensions = [1, 2, 3]
    result = DummyResult(data="abc", extensions=extensions)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_errors_with_non_dict_formatted():
    # errors list contains errors with formatted as strings, ints, dicts
    errors = [DummyError("err1"), DummyError(42), DummyError({"msg": "err3"})]
    result = DummyResult(data="data", errors=errors)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_errors_with_mixed_none_and_values():
    # errors list contains None and valid DummyError
    errors = [DummyError("err"), DummyError(None)]
    result = DummyResult(data="d", errors=errors)
    codeflash_output = process_result(result); out = codeflash_output

# 3. Large Scale Test Cases

def test_process_result_large_errors_list():
    # Large number of errors
    errors = [DummyError({"idx": i}) for i in range(1000)]
    result = DummyResult(data="big", errors=errors)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_large_extensions_dict():
    # Large extensions dict
    extensions = {f"key{i}": i for i in range(1000)}
    result = DummyResult(data="huge", extensions=extensions)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_large_data_structure():
    # Large data structure in data
    data = {"values": list(range(1000))}
    result = DummyResult(data=data)
    codeflash_output = process_result(result); out = codeflash_output

def test_process_result_large_errors_and_extensions():
    # Large errors and extensions together
    errors = [DummyError({"idx": i}) for i in range(500)]
    extensions = {f"e{i}": i for i in range(500)}
    result = DummyResult(data="combo", errors=errors, extensions=extensions)
    codeflash_output = process_result(result); out = codeflash_output
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

from __future__ import annotations


# imports
import pytest  # used for our unit tests
from strawberry.http.__init__ import process_result


# Dummy stand-ins for the actual types, since we don't have strawberry's internals.
# In real code, these would be imported from strawberry.
class GraphQLIncrementalExecutionResults:
    pass

# We define GraphQLHTTPResponse as a type alias for dict for this test suite.
GraphQLHTTPResponse = dict

# Dummy error class to mimic the .formatted attribute
class DummyError:
    def __init__(self, formatted):
        self.formatted = formatted

# Dummy result type to mimic the required interface
class DummyResult:
    def __init__(self, data=None, errors=None, extensions=None):
        self.data = data
        self.errors = errors
        self.extensions = extensions

# unit tests

# 1. Basic Test Cases

def test_basic_data_only():
    # Test with only data, no errors or extensions
    result = DummyResult(data={"foo": "bar"})
    codeflash_output = process_result(result); output = codeflash_output

def test_basic_data_and_errors():
    # Test with data and one error
    error = DummyError(formatted={"message": "An error occurred"})
    result = DummyResult(data={"foo": "bar"}, errors=[error])
    codeflash_output = process_result(result); output = codeflash_output

def test_basic_data_and_extensions():
    # Test with data and extensions
    result = DummyResult(data={"foo": "bar"}, extensions={"ext": 1})
    codeflash_output = process_result(result); output = codeflash_output

def test_basic_data_errors_and_extensions():
    # Test with data, errors, and extensions
    error = DummyError(formatted={"message": "Oops"})
    result = DummyResult(data={"foo": "bar"}, errors=[error], extensions={"ext": 2})
    codeflash_output = process_result(result); output = codeflash_output

def test_basic_no_data():
    # Test with no data, but with errors and extensions
    error = DummyError(formatted={"message": "No data"})
    result = DummyResult(data=None, errors=[error], extensions={"ext": 3})
    codeflash_output = process_result(result); output = codeflash_output

# 2. Edge Test Cases

def test_edge_empty_errors_and_extensions():
    # Test with empty errors list and empty extensions dict
    result = DummyResult(data={"foo": "bar"}, errors=[], extensions={})
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_errors_is_none_extensions_is_none():
    # Test with errors=None and extensions=None
    result = DummyResult(data={"foo": "bar"}, errors=None, extensions=None)
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_errors_is_empty_list_extensions_present():
    # Test with errors=[], extensions present
    result = DummyResult(data={"foo": "bar"}, errors=[], extensions={"a": 1})
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_errors_present_extensions_empty_dict():
    # Test with errors present, extensions={}
    error = DummyError(formatted={"message": "fail"})
    result = DummyResult(data={"foo": "bar"}, errors=[error], extensions={})
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_multiple_errors():
    # Test with multiple errors
    errors = [DummyError(formatted={"message": f"error {i}"}) for i in range(3)]
    result = DummyResult(data={"foo": "bar"}, errors=errors)
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_data_is_none():
    # Test with data=None, errors=None, extensions=None
    result = DummyResult(data=None, errors=None, extensions=None)
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_data_is_falsy():
    # Test with data as a falsy value (empty dict)
    result = DummyResult(data={}, errors=None, extensions=None)
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_data_is_zero():
    # Test with data as zero (int)
    result = DummyResult(data=0, errors=None, extensions=None)
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_errors_contains_none():
    # Test with errors list containing a None value (should raise AttributeError)
    result = DummyResult(data={"foo": "bar"}, errors=[None])
    with pytest.raises(AttributeError):
        process_result(result)

def test_edge_extensions_is_none_errors_present():
    # Test with extensions=None and errors present
    error = DummyError(formatted={"message": "err"})
    result = DummyResult(data={"foo": "bar"}, errors=[error], extensions=None)
    codeflash_output = process_result(result); output = codeflash_output

def test_edge_extensions_is_empty_dict_errors_present():
    # Test with extensions as empty dict and errors present
    error = DummyError(formatted={"message": "err"})
    result = DummyResult(data={"foo": "bar"}, errors=[error], extensions={})
    codeflash_output = process_result(result); output = codeflash_output



def test_edge_result_extra_attributes():
    # Test with result having extra attributes
    class ExtraResult:
        def __init__(self):
            self.data = {"foo": "bar"}
            self.errors = None
            self.extensions = None
            self.extra = 123
    result = ExtraResult()
    codeflash_output = process_result(result); output = codeflash_output

# 3. Large Scale Test Cases

def test_large_many_errors():
    # Test with a large number of errors (but < 1000)
    errors = [DummyError(formatted={"message": f"error {i}"}) for i in range(500)]
    result = DummyResult(data={"foo": "bar"}, errors=errors)
    codeflash_output = process_result(result); output = codeflash_output

def test_large_many_extensions():
    # Test with a large extensions dict
    extensions = {f"key{i}": i for i in range(500)}
    result = DummyResult(data={"foo": "bar"}, extensions=extensions)
    codeflash_output = process_result(result); output = codeflash_output

def test_large_data_structure():
    # Test with a large data dict
    data = {f"field{i}": i for i in range(500)}
    result = DummyResult(data=data)
    codeflash_output = process_result(result); output = codeflash_output

def test_large_all_fields():
    # Test with large data, errors, and extensions
    data = {f"field{i}": i for i in range(200)}
    errors = [DummyError(formatted={"message": f"error {i}"}) for i in range(200)]
    extensions = {f"key{i}": i for i in range(200)}
    result = DummyResult(data=data, errors=errors, extensions=extensions)
    codeflash_output = process_result(result); output = codeflash_output
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
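The `codeflash_output` convention above captures each call's return value so the harness can compare the original and optimized implementations on the same inputs. A toy sketch of that equivalence check (hypothetical helper, not Codeflash's actual harness):

```python
def assert_equivalent(fn_original, fn_optimized, inputs):
    # Capture each call's output (the role of `codeflash_output` above)
    # and require the optimized function to match the original exactly.
    for args in inputs:
        assert fn_original(*args) == fn_optimized(*args), args


# Toy example: two implementations of the same function.
def square_loop(n):
    # Deliberately slow reference implementation.
    return sum(n for _ in range(n))


def square_mul(n):
    # Optimized implementation under test.
    return n * n


assert_equivalent(square_loop, square_mul, [(0,), (3,), (10,)])
```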

To edit these changes, run `git checkout codeflash/optimize-pr3819-2025-07-01T11.11.27` and push.

Codeflash

Summary by Sourcery

Optimize process_result performance by refactoring response object construction to reduce unnecessary intermediate data allocations and conditionalize the inclusion of errors and extensions.

Enhancements:

  • Streamline data dictionary assembly in process_result to avoid unpacking and intermediate variables, yielding a ~12% speedup.

Tests:

  • Add comprehensive regression tests for process_result covering combinations of data, errors, and extensions, including edge and large-scale scenarios.

patrick91 and others added 2 commits July 1, 2025 12:07
…fer`)

@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Jul 1, 2025

sourcery-ai bot commented Jul 1, 2025

Reviewer's Guide

This PR optimizes the process_result function by removing unneeded intermediates and replacing repeated dictionary unpacking with targeted conditional assignments, reducing allocations and improving runtime by about 12%.

File-Level Changes

Change: Streamline initial response dict construction by removing intermediate variables and dict unpacking (strawberry/http/__init__.py)
  • Removed the errors and extensions assignments
  • Initialized the response dict with only the data field
  • Eliminated ** unpacking for errors and extensions
Change: Use conditional if guards to add errors and extensions only when present (strawberry/http/__init__.py)
  • Added if result.errors to append the formatted errors list
  • Added if result.extensions to append the extensions object



@greptile-apps greptile-apps bot left a comment


PR Summary

Optimized the process_result function in strawberry/http/__init__.py for a 12% performance improvement by streamlining dictionary construction and variable handling.

  • Eliminated intermediate variables for errors and extensions, reducing memory allocations
  • Replaced dictionary unpacking with direct key assignments for better performance
  • Added comprehensive test suite with 40 test cases covering edge cases and large-scale inputs
  • Improved handling of conditional additions to response dictionary
  • Maintains identical functionality while reducing execution time from 100μs to 89.8μs
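The unpacking-versus-direct-assignment trade-off described above can be checked with a small `timeit` sketch (hypothetical helper names; absolute timings vary by machine and Python version, so no specific speedup is implied):

```python
import timeit


def build_unpacking(data, errors, extensions):
    # Original style: conditional dict unpacking allocates throwaway dicts.
    return {
        "data": data,
        **({"errors": errors} if errors else {}),
        **({"extensions": extensions} if extensions else {}),
    }


def build_direct(data, errors, extensions):
    # Optimized style: build the dict once, add keys only when needed.
    out = {"data": data}
    if errors:
        out["errors"] = errors
    if extensions:
        out["extensions"] = extensions
    return out


args = ({"foo": "bar"}, [{"message": "err"}], {"cost": 1})
t1 = timeit.timeit(lambda: build_unpacking(*args), number=200_000)
t2 = timeit.timeit(lambda: build_direct(*args), number=200_000)
print(f"unpacking: {t1:.3f}s  direct: {t2:.3f}s")
```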

1 file reviewed, no comments

Base automatically changed from feature/defer to main July 18, 2025 22:23
@codeflash-ai codeflash-ai bot closed this Jul 18, 2025

codeflash-ai bot commented Jul 18, 2025

This PR has been automatically closed because the original PR #3819 by patrick91 was closed.

@codeflash-ai codeflash-ai bot deleted the codeflash/optimize-pr3819-2025-07-01T11.11.27 branch July 18, 2025 22:23