Cascading from child to parent objects in JPA
I have the following scenario.
We have two objects in our domain which form a parent-child relationship: Person is the parent and Role is the child. Both objects extend from the same ancestor.
Instead of modelling the relationship as bidirectional, the child only holds the object id of the parent, represented as a long. The parent has a @OneToMany mapping to the child.
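Roughly, the mapping looks like this (a sketch only; the field and column names are simplified, and the exact read-only flags are illustrative rather than copied from our code):

@Entity
public class Person {                     // parent
    @Id @GeneratedValue
    private Long id;

    // unidirectional one-to-many over the child's PERSON_ID column;
    // marked read-only here because the child writes the value itself
    @OneToMany
    @JoinColumn(name = "PERSON_ID", insertable = false, updatable = false)
    private Set<Role> roles;
}

@Entity
public class Role {                       // child
    @Id @GeneratedValue
    private Long id;

    // only the parent's database id is stored, not an object reference
    @Column(name = "PERSON_ID")
    private long personId;
}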
The issue we are experiencing is as follows: the domain layer currently creates and persists the child in isolation and then simply sets the id of the parent onto it.
The problem with this approach is that a Person already loaded into the persistence context does not get refreshed with the new Role. This is causing havoc in our application as we attempt to orchestrate several operations within a single transaction, and we can't leverage second-level caching either. We are looking to create a bidirectional relationship, but I am unclear what the best approach to dealing with the child object is.
All the suggestions I have seen indicate that the parent object should be saved and the operation should then cascade to the child, and that this approach should be followed for all cascade types, so a change to the child is persisted via the parent.
I'm not disputing this approach. However, seeing as there is already an implementation, is it viable to have cascading from the child to the parent? I would imagine this should be applicable to MERGE and REFRESH. It seems a viable approach for synchronizing the persistence context with changes to the object without needing to change the underlying implementation beyond adding the @ManyToOne annotations.
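Concretely, something like the following is what I have in mind (a sketch only; the cascade types shown are just the ones I would expect to need, not code we already have):

@Entity
public class Role {
    @Id @GeneratedValue
    private Long id;

    // proposed: a real object reference back to the parent,
    // cascading only MERGE and REFRESH from child to parent
    @ManyToOne(fetch = FetchType.LAZY,
               cascade = { CascadeType.MERGE, CascadeType.REFRESH })
    @JoinColumn(name = "PERSON_ID")
    private Person person;
}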
I would welcome any comment or recommendations.
Tags: java, jpa, orm, persistence, eclipselink
asked Nov 28 '15 at 10:45 by Garrick van Schalkwyk; edited Nov 28 '15 at 10:53 by Vlad Mihalcea
2 Answers
No, cascading from the Child to the Parent is not a good idea at all.
I suggest you take the other approach and have the association from the Child to the Parent, using a @ManyToOne association. The @ManyToOne relationship is the most natural association since it follows the FK approach taken by the RDBMS.
Since you already use a @OneToMany association, you just have to turn it into a mappedBy one and add a cascade from the Parent to the Child. This approach allows you to save the Child in isolation. The only thing you need to be careful with is to synchronize both sides if the EntityManager has loaded both the Parent and the Child. But if you only load the Child without fetching the Parent, you can simply operate with the Child alone (e.g. setting the Parent to null).
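A minimal sketch of what that would look like for the Person/Role entities from the question, each in its own source file (the addRole/removeRole helpers are the side-synchronization mentioned above; the cascade type and field names are illustrative):

@Entity
public class Person {
    @Id @GeneratedValue
    private Long id;

    // inverse side: mappedBy points at Role.person; cascade runs Parent -> Child
    @OneToMany(mappedBy = "person", cascade = CascadeType.ALL)
    private Set<Role> roles = new HashSet<>();

    // keep both sides of the association in sync
    public void addRole(Role role) {
        roles.add(role);
        role.setPerson(this);
    }

    public void removeRole(Role role) {
        roles.remove(role);
        role.setPerson(null);
    }
}

@Entity
public class Role {
    @Id @GeneratedValue
    private Long id;

    // owning side: holds the foreign key column, replacing the raw long id
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "PERSON_ID")
    private Person person;

    public void setPerson(Person person) {
        this.person = person;
    }
}

With this mapping the Role can still be saved in isolation (em.persist(role) after role.setPerson(person)), while changes made through the Person collection cascade down to its roles.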
answered Nov 28 '15 at 11:00 by Vlad Mihalcea

Comment (Garrick van Schalkwyk, Nov 28 '15 at 11:36): Thank you! Would you need to refresh both the L1 and L2 (if in use) or would a refresh on the L1 cache also refresh the L2 cache?

Comment (Vlad Mihalcea, Nov 28 '15 at 11:45): EclipseLink uses L2 cache by default, but I'm not sure if it does so for Collections too. If you don't use Collection cache, there is no need to refresh the Parent.
As you're guessing, and as the Javadoc says, cascade defines "the operations that must be cascaded to the target of the association". However, be sure you understand that mappedBy identifies the owning side of the relationship: the owning entity is the entity that actually does the persisting operations, unless overridden by a cascade setting. In this case Child is the owning entity.
@Entity
public class Parent {

    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // inverse side: mappedBy points at the Child.parent field
    @OneToMany(mappedBy = "parent")
    private Set<Child> children;
}
The cascade setting on the Parent works when you create a Set of children, set it into the Parent, and then save the Parent; the save operation will then cascade from the Parent to the children. This is the more typical and expected use case of a cascade setting. However, it does cause database operations to happen auto-magically, and this is not always a good thing.
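For completeness, a minimal sketch of that parent-side case, assuming the Parent mapping above is given a cascade attribute (it has none in the snippet shown) and that the usual setters exist:

// Parent side: @OneToMany(mappedBy = "parent", cascade = CascadeType.ALL)
tx.begin();
Parent p = new Parent();
Child c1 = new Child();
Child c2 = new Child();
c1.setParent(p);                              // the owning side must still be set,
c2.setParent(p);                              // otherwise the FK column stays null
p.setChildren(new HashSet<>(Arrays.asList(c1, c2)));
em.persist(p);                                // persist cascades from Parent to both children
tx.commit();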
A cascade setting on the child takes effect when the child is persisted, so you could put a cascade annotation there, but read on ...
@Entity
public class Child {

    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // owning side: holds the FK and cascades every operation from Child to Parent
    @ManyToOne(cascade = CascadeType.ALL)
    private Parent parent;
}
You will persist both the parent and the child by persisting the child.
tx.begin();
Parent p = new Parent();
Child c = new Child();
c.setParent(p);
em.persist(c);
tx.commit();
And when you delete the child, it will delete both the parent and the child.
tx.begin();
Child cFound = em.find(Child.class, 1L);
em.remove(cFound);
tx.commit();
em.clear();
This is where you have problems. What happens if you have more than one child?
em.clear();
tx.begin();
p = new Parent();
Child c1 = new Child();
Child c2 = new Child();
c1.setParent(p);
c2.setParent(p);
em.persist(c1);
em.persist(c2);
tx.commit();
All well and good until you delete one of the children:
em.clear();
tx.begin();
cFound = em.find(Child.class, 2L);
em.remove(cFound);
tx.commit();
Then you will get an integrity constraint violation when the cascade propagates to the Parent while there is still a second Child in the database. Sure, you could cure it by deleting all the children in a single commit, but that's getting kind of messy, isn't it?
Conceptually, people tend to think that propagation goes from Parent to Child, so it is very counterintuitive to have it the other way around. Further, what about a situation where you don't want to delete the author just because the store sold all his or her books? In that case you might end up mixing cascades, sometimes from child to parent and in other cases from parent to child.
Generally, I think it is better to be very precise in your database code. It's much easier to read, understand, and maintain code that explicitly saves the parent first and then the child or children than to have an annotation somewhere else, which I may or may not be aware of, doing additional database operations implicitly.
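For instance, a minimal sketch of that explicit style, assuming no cascade on either side of the Parent/Child mapping and the usual getters and setters:

// Explicit ordering, no cascades: persist the parent, then its children.
tx.begin();
Parent p = new Parent();
em.persist(p);                  // parent saved first

Child c1 = new Child();
c1.setParent(p);                // FK set explicitly on the owning side
em.persist(c1);

Child c2 = new Child();
c2.setParent(p);
em.persist(c2);
tx.commit();

// Deleting one child later touches only that row; the parent is left alone.
tx.begin();
em.remove(em.find(Child.class, c2.getId()));
tx.commit();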
answered Nov 13 '18 at 17:48 by K.Nicholas; edited Nov 13 '18 at 18:05